Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • Entity Framework VS LINQ to SQL VS ADO.NET with stored procedures?

    - by BritishDeveloper
    How would you rate each of them in terms of: performance; speed of development; neat, intuitive, maintainable code; flexibility; and overall? I like my SQL and so have always been a die-hard fan of ADO.NET and stored procedures, but I recently had a play with LINQ to SQL and was blown away by how quickly I was writing out my data access layer, and I have decided to spend some time really understanding either LINQ to SQL or EF... or neither? I just want to check that there isn't a great flaw in any of these technologies that would render my research time useless, e.g. performance is terrible, or it's cool for simple apps but can only take you so far.

    Read the article

  • How can I save the schema of a SQL Database to a file?

    - by Eric
    I'm writing a software application in C#.Net that connects to a SQL Server database. My C# project is under SVN version control, but I'd like to include my database schema in the SVN repository as well. An answer to a previous question of mine suggested storing the scripts to generate the database in version control. Is there a way to automatically generate these scripts from an existing database? I'm very new to SQL Server, but I noticed in management studio that the SQL commands to create a table can be generated automatically by right clicking on the table and clicking "Script Table As". Is there an equivalent command that would work with the entire database?

    Read the article

  • How do I get a count of events each day with SQL?

    - by upl8
    I have a table that looks like this:

        Timestamp         Event  User
        ================  =====  =====
        1/1/2010 1:00 PM  100    John
        1/1/2010 1:00 PM  103    Mark
        1/2/2010 2:00 PM  100    John
        1/2/2010 2:05 PM  100    Bill
        1/2/2010 2:10 PM  103    Frank

    I want to write a query that shows the events for each day and a count for those events. Something like:

        Date      Event  EventCount
        ========  =====  ==========
        1/1/2010  100    1
        1/1/2010  103    1
        1/2/2010  100    2
        1/2/2010  103    1

    The database is SQL Server Compact, so it doesn't support all the features of the full SQL Server. The query I have written so far is

        SELECT DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0) as Date, Event, Count(Event) as EventCount
        FROM Log
        GROUP BY Timestamp, Event

    This almost works, but EventCount is always 1. How can I get SQL Server to return the correct counts? All fields are mandatory.
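
    The counts come out as 1 because the GROUP BY is on the raw Timestamp, so every distinct timestamp forms its own group. A minimal sketch of the likely fix is to group by the same day-truncation expression used in the SELECT (table and column names as in the question; whether Compact accepts an expression in GROUP BY is worth verifying, hence the second form):

        -- Group by the day, not the raw timestamp.
        SELECT DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0) AS Date,
               Event,
               COUNT(Event) AS EventCount
        FROM Log
        GROUP BY DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0), Event

        -- If the Compact engine objects to an expression in GROUP BY,
        -- compute the day in a derived table first and group by that column:
        SELECT d.Date, d.Event, COUNT(*) AS EventCount
        FROM (SELECT DATEADD(dd, DATEDIFF(dd, 0, Timestamp), 0) AS Date, Event
              FROM Log) AS d
        GROUP BY d.Date, d.Event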

    Read the article

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005: version A: SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = 1234 ORDER BY name ASC version B: DECLARE @Id AS INT; SET @Id = 1234; SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @Id ORDER BY name ASC Both queries return 1000 rows; version A takes on average 15s; version B on average takes 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server profiler: EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234; ...and this tends to perform as badly as version A. Thanks in advance,
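
    This pattern, where a literal (or a sniffed parameter via sp_executesql) performs differently from a local variable, usually comes down to the plans the optimizer builds for each form: the literal and the sniffed parameter get a plan tailored to that specific value, while the local variable gets a plan based on average density. A hedged sketch of one common mitigation on SQL Server 2005 is to ask for a fresh plan per execution; whether it helps depends on the actual plans, which are worth comparing first:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC
        OPTION (RECOMPILE);
        -- Alternatively, OPTION (OPTIMIZE FOR (@Id = 1234)) pins the plan to a
        -- representative value instead of recompiling on every execution.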

    Read the article

  • Is there a combination of "LIKE" and "IN" in SQL?

    - by Techpriester
    Hi folks. In SQL I (sadly) often have to use "LIKE" conditions due to databases that violate nearly every rule of normalization. I can't change that right now. But that's irrelevant to the question. Further, I often use conditions like WHERE something in (1,1,2,3,5,8,13,21) for better readability and flexibility of my SQL statements. Is there any possible way to combine these two things without writing complicated sub-selects? I want something as easy as WHERE something LIKE ('bla%', '%foo%', 'batz%') instead of WHERE something LIKE 'bla%' OR something LIKE '%foo%' OR something LIKE 'batz%'. I'm working with MS SQL Server and Oracle here, but I'm interested in whether this is possible in any RDBMS at all.
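
    There is no built-in LIKE ('a', 'b', 'c') form in either engine, but you can get close by joining against a list of patterns. A sketch for SQL Server 2008 and later using the VALUES row constructor (on SQL Server 2005 or Oracle, a small derived table built with UNION ALL does the same job); table and column names here are placeholders:

        SELECT DISTINCT t.*
        FROM MyTable AS t
        JOIN (VALUES ('bla%'), ('%foo%'), ('batz%')) AS p(pattern)
          ON t.something LIKE p.pattern;
        -- DISTINCT guards against a row that matches more than one pattern
        -- being returned twice.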

    Read the article

  • How to modify the Title Bar text for SQL Server Management Studio?

    - by DaveDev
    Sometimes I keep multiple instances of SQL Server Management Studio 2005 open. I might have the dev database open in one, and the production database open in another. These appear in the Windows taskbar with the text "Microsoft SQL Serve...", which means it's impossible to differentiate between them unless I open the window and scroll the Object Explorer up to see what server the window is actually connected to. Is there any way that I can get the window to display the server name first, and then the name of the application? Like "Dev-DB.database_name - Microsoft SQL Serve..." or whatever?

    Read the article

  • T-SQL - Is there a (free) way to compare data in two tables?

    - by RPM1984
    Okay, so I have table a and table b (SQL Server 2008). Both tables have the exact same schema. For the purposes of this question, consider table a = my local dev table, table b = the live table. I need to create a SQL script (containing UPDATE/DELETE/INSERT statements) that will update table b to be the same as table a. This script will then be deployed to the live database. Any free tools out there that can do this, or better yet a way I can do it myself? I'm thinking I probably need to do some type of join on all the fields in the tables, then generate dynamic SQL based on that. Anyone have any ideas?
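
    One way to do this yourself on SQL Server 2008 is a MERGE statement that treats the dev table as the source and the live table as the target; it performs exactly the UPDATE/DELETE/INSERT set described above (the tablediff.exe utility that ships with SQL Server can also generate a fix script). A sketch with made-up table and column names; a real script would list every column and be reviewed before running against production:

        MERGE INTO dbo.TableB AS tgt            -- live
        USING dbo.TableA AS src                 -- dev
            ON tgt.Id = src.Id                  -- key used to match rows
        WHEN MATCHED AND (tgt.Col1 <> src.Col1 OR tgt.Col2 <> src.Col2)
            THEN UPDATE SET tgt.Col1 = src.Col1, tgt.Col2 = src.Col2
        WHEN NOT MATCHED BY TARGET
            THEN INSERT (Id, Col1, Col2) VALUES (src.Id, src.Col1, src.Col2)
        WHEN NOT MATCHED BY SOURCE
            THEN DELETE;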

    Read the article

  • How to force SQL Server 2008 to not change AUTOINC_NEXT value when IDENTITY_INSERT is ON ?

    - by evilek
    Hello, I have a question about IDENTITY_INSERT. When you change it to ON, SQL Server automatically changes the AUTOINC_NEXT value to the last value inserted as the identity. So if you have only one row with ID = 1 and insert a row with ID = 100 while IDENTITY_INSERT is ON, then the next inserted row will have ID = 101. I'd like it to be 2 without needing to reseed. Such behaviour already exists in SQL Server Compact 3.5. Is it possible to force SQL Server 2008 not to change the AUTOINC_NEXT value while doing an insert with IDENTITY_INSERT = ON?
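
    As far as I know there is no setting in SQL Server 2008 that stops the identity seed from advancing when an explicit value is inserted, so the usual workaround is exactly the reseed the question hopes to avoid: remember the current seed and put it back afterwards. A sketch with a placeholder table name:

        DECLARE @seed bigint = IDENT_CURRENT('dbo.MyTable');   -- current seed

        SET IDENTITY_INSERT dbo.MyTable ON;
        INSERT INTO dbo.MyTable (ID, Name) VALUES (100, 'explicit id');
        SET IDENTITY_INSERT dbo.MyTable OFF;

        DBCC CHECKIDENT ('dbo.MyTable', RESEED, @seed);         -- restore it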

    Read the article

  • Is it possible to password-protect a SQL Server database even from administrators of the server?

    - by imanabidi
    I want to install an application (ASP.NET + SQL Server 2005 Express) on the local network of a small company for a demo, but I also want nobody, not even the sysadmin, to see anything directly from the database, and any access to require a secure password. I need to spend more time on the article Database Encryption in SQL Server 2008 Enterprise Edition that I found from the answer is-it-possible-to-password-protect-an-sql-server-database, but: 1. I'd like to be sure and clearer on this, because the other answer on that page says: "Yes, you can protect it from everyone except the administrators of the server." 2. If this is possible, does the database have to be Enterprise Edition? 3. Are there any other possible solutions or workarounds for this? Thanks in advance
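
    The feature the linked article covers is Transparent Data Encryption (TDE), which in SQL Server 2008 is an Enterprise Edition feature; it encrypts the data and log files at rest but does not hide data from a sysadmin who can query the database, because the server itself holds the keys. For reference only, a minimal TDE setup sketch with placeholder names:

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd';
        CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate';

        USE DemoDb;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TdeCert;
        ALTER DATABASE DemoDb SET ENCRYPTION ON;
        -- Back up the certificate and its private key; without them the
        -- database cannot be restored on another server.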

    Read the article

  • Why does the SQL Server Express 2008 install require Visual Studio 2008 in its checklist?

    - by asksuperuser
    When installing SQL Server 2008 Express Edition, the checklist says "Previous version of Visual Studio 2008" and asks me to upgrade to SP1. Unfortunately, SP1 for some reason refuses to install on my brand new PC (Windows 7). So why can't I just bypass this? Why would SQL Server Express need VS2008 to install? That's insane. The SQL Server install used to be as easy as 123; now it has become a nightmare like installing Oracle. Will I have to go back to Windows XP?

    Read the article

  • Linq to SQL DateTime values are local (Kind=Unspecified) - How do I make it UTC?

    - by ericsson007
    Isn't there a (simple) way to tell LINQ to SQL classes that a particular DateTime property should be considered UTC (i.e. having the Kind property of the DateTime type be Utc by default), or is there a 'clean' workaround? The time zone on my app server is not the same as on the SQL 2005 server (I cannot change either), and neither is UTC. When I persist a property of type DateTime to the DB I use the UTC value (so the value in the DB column is UTC), but when I read the values back (using LINQ to SQL) the .Kind property of the DateTime value is 'Unspecified'. The problem is that when I 'convert' it to UTC it is 4 hours off. This also means that when it is serialized it ends up on the client side with a 4-hour wrong offset (since it is serialized using UTC).

    Read the article

  • AWS S3 status code 403 SignatureDoesNotMatch - Check your key and signing method

    - by Matt
    i have checked a rechecked my keys and they are correct but i keep the below Exception whenever i try to upload, can anyone shed some light on this problem? 11-01 09:21:26.331: W/System.err(13934): AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: B97CC81E13D81XXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: PPieuQpqElIizNBQPc42JwC4WnBkUciCKRT5S4HSftBraeacN8y0lKfsVP9LXXXX 11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:659) 11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:330) 11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:182) 11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3038) 11-01 09:21:26.331: W/System.err(13934): at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1218) 11-01 09:21:26.331: W/System.err(13934): at com.apprssd.capsule.S3UploaderActivity$S3PutObjectTask.doInBackground(S3UploaderActivity.java:165) 11-01 09:21:26.331: W/System.err(13934): at com.apprssd.capsule.S3UploaderActivity$S3PutObjectTask.doInBackground(S3UploaderActivity.java:1) 11-01 09:21:26.331: W/System.err(13934): at android.os.AsyncTask$2.call(AsyncTask.java:287) 11-01 09:21:26.331: W/System.err(13934): at java.util.concurrent.FutureTask.run(FutureTask.java:234) 11-01 09:21:26.341: W/System.err(13934): at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230) 11-01 09:21:26.341: W/System.err(13934): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) 11-01 09:21:26.341: W/System.err(13934): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) 11-01 09:21:26.341: W/System.err(13934): at java.lang.Thread.run(Thread.java:841) It happens during the below. PutObjectRequest por = new PutObjectRequest( Constants.getDataBucket(), selectedZip.toString(), selectedZip); s3Client.putObject(por);

    Read the article

  • MySQL: what's wrong with this query?

    - by Hailwood
    I'm trying to write a query that selects from four tables: campaignSentParent csp, campaignSentEmail cse, campaignSentFax csf, campaignSentSms css. Each of the cse, csf, and css tables is linked to the csp table by csp.id = (cse/csf/css).parentId. The csp table has a column called campaignId. What I want to do is end up with rows that look like:

        | id | dateSent   | emailsSent | faxsSent | smssSent |
        | 1  | 2011-02-04 | 139        | 129      | 140      |

    But instead I end up with a row that looks like:

        | 1 | 2011-02-03 | 2510340 | 2510340 | 2510340 |

    Here is the query I am trying:

        SELECT csp.id id, csp.dateSent dateSent,
               COUNT(cse.parentId) emailsSent,
               COUNT(csf.parentId) faxsSent,
               COUNT(css.parentId) smsSent
        FROM campaignSentParent csp,
             campaignSentEmail cse,
             campaignSentFax csf,
             campaignSentSms css
        WHERE csp.campaignId = 1
          AND csf.parentId = csp.id
          AND cse.parentId = csp.id
          AND css.parentId = csp.id;

    Adding GROUP BY did not help, so I am posting the create statements.

    csp:

        CREATE TABLE `campaignsentparent` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `campaignId` int(11) NOT NULL,
          `dateSent` datetime NOT NULL,
          `account` int(11) NOT NULL,
          `status` varchar(15) NOT NULL DEFAULT 'Creating',
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1

    cse/csf (same structure, different names):

        CREATE TABLE `campaignsentemail` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `parentId` int(11) NOT NULL,
          `contactId` int(11) NOT NULL,
          `content` text,
          `subject` text,
          `status` varchar(15) DEFAULT 'Pending',
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=140 DEFAULT CHARSET=latin1

    css:

        CREATE TABLE `campaignsentsms` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `parentId` int(11) NOT NULL,
          `contactId` int(11) NOT NULL,
          `content` text,
          `status` varchar(15) DEFAULT 'Pending',
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=141 DEFAULT CHARSET=latin1
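
    The huge numbers are the classic symptom of the three child tables being cross-multiplied before COUNT runs: every matching email row pairs with every fax row and every SMS row for the same parent. One sketch of a fix is to count each child table independently with correlated subqueries (names taken from the question; counting DISTINCT ids over LEFT JOINs would also work):

        SELECT csp.id,
               csp.dateSent,
               (SELECT COUNT(*) FROM campaignSentEmail cse WHERE cse.parentId = csp.id) AS emailsSent,
               (SELECT COUNT(*) FROM campaignSentFax   csf WHERE csf.parentId = csp.id) AS faxsSent,
               (SELECT COUNT(*) FROM campaignSentSms   css WHERE css.parentId = csp.id) AS smssSent
        FROM campaignSentParent csp
        WHERE csp.campaignId = 1;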

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 1)

    - by Sadequl Hussain
    It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones.  They move around as an organisation’s data grows, applications are enhanced or new versions of the database software are released. If not anything else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios: 1.     A new  database application is rolled out in a production server from the development or test environment 2.     A copy of the production database needs to be installed in a test server for troubleshooting purposes 3.     A copy of the development database is regularly refreshed in a test server during the system development life cycle 4.     A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration 5.     One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different version of SQL Server 6.     A database has to be restored from a backup file provided by a third party application vendor 7.     A backup of the database is restored in the same or different instance for disaster recovery 8.     A database needs to be migrated within the same instance: a.     Files are moved from direct attached storage to storage area network b.    The same database is copied under a different name for another application Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself. Unless it is an in-place upgrade, typically the database is taken from a source server and installed in a destination instance.  Most of the time, a full backup file is used for the rollout. The backup file is either provided to the DBA or the DBA takes the backup and restores it in the target server. Sometimes the database is detached from the source and the files are copied to and attached in the destination. Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post database installation steps we are going to discuss below. Some of these steps apply in almost every scenario described above while some will depend on the type of objects contained within the database.  Also, the principles hold regardless of the number of databases involved. Step 1:  Make a copy of data and log files when attaching and detaching When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following: Server: Msg 602, Level 21, State 50, Line 1 Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes. Connection Broken If you try to backup the attached database and restore it in the source, it will still fail. 
Similarly, if you are restoring the database in a newer version, it cannot be backed up or detached and put back in an older version of SQL. Unlike detach and attach method though, you do not lose the backup file or the original database here. When detaching and attaching a database, it is important you keep all the log files available along with the data files. It is possible to attach a database without a log file and SQL Server can be instructed to create a new log file, however this does not work if the database was detached when the primary file group was read-only. You will need all the log files in such cases. Step 2: Change database compatibility level Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL, the database retains the older version’s compatibility level.  The only time you would want to keep a database with an older compatibility level is when the code within your database is no longer supported by SQL Server. For example, outer joins with *= or the =* operators were still possible in SQL 2000 (with a warning message), but not in SQL 2005 anymore. If your stored procedures or triggers are using this form of join, you would want to keep the database with an older compatibility level.  For a list of compatibility issues between older and newer versions of SQL Server databases, refer to the Books Online under the sp_dbcmptlevel topic. Application developers and architects can help you in deciding whether you should change the compatibility level or not. You can always change the compatibility mode from the newest to an older version if necessary. To change the compatibility level, you can either use the database’s property from the SQL Server Management Studio or use the sp_dbcmptlevel stored procedure.   Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database with an older compatibility level. The following figure shows the error message I received when trying to run the “Disk Usage by Top Tables” report against a database. This database was hosted in a SQL Server 2005 system and still had a compatibility mode 80 (SQL 2000).     Continues…
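
    To make Step 2 concrete, a short sketch (the database name is a placeholder): check the current compatibility level from sys.databases, then raise it either with the sp_dbcmptlevel procedure the article mentions or with the ALTER DATABASE form available from SQL Server 2008 onwards:

        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = 'MyDatabase';

        EXEC sp_dbcmptlevel 'MyDatabase', 90;                     -- SQL Server 2005 level

        ALTER DATABASE MyDatabase SET COMPATIBILITY_LEVEL = 100;  -- SQL Server 2008 syntax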

    Read the article

  • SQL Server Developer Tools – Codename Juneau vs. Red-Gate SQL Source Control

    - by Ajarn Mark Caldwell
    So how do the new SQL Server Developer Tools (previously code-named Juneau) stack up against SQL Source Control?  Read on to find out. At the PASS Community Summit a couple of weeks ago, it was announced that the previously code-named Juneau software would be released under the name of SQL Server Developer Tools with the release of SQL Server 2012.  This replacement for Database Projects in Visual Studio (also known in a former life as Data Dude) has some great new features.  I won’t attempt to describe them all here, but I will applaud Microsoft for making major improvements.  One of my favorite changes is the way database elements are broken down.  Previously every little thing was in its own file.  For example, indexes were each in their own file.  I always hated that.  Now, SSDT uses a pattern similar to Red-Gate’s and puts the indexes and keys into the same file as the overall table definition. Of course there are really cool features to keep your database model in sync with the actual source scripts, and the rename refactoring feature is now touted as being more than just a search and replace, but rather a “semantic-aware” search and replace.  Funny, it reminds me of SQL Prompt’s Smart Rename feature.  But I’m not writing this just to criticize Microsoft and argue that they are late to the party with this feature set.  Instead, I do see it as a viable alternative for folks who want all of their source code to be version controlled, but there are a couple of key trade-offs that you need to know about when you choose which tool set to use. First, the basics Both tool sets integrate with a wide variety of source control systems including the most popular: Subversion, GIT, Vault, and Team Foundation Server.  Both tools have integrated functionality to produce objects to upgrade your target database when you are ready (DACPACs in SSDT, integration with SQL Compare for SQL Source Control).  If you regularly live in Visual Studio or the Business Intelligence Development Studio (BIDS) then SSDT will likely be comfortable for you.  Like BIDS, SSDT is a Visual Studio Project Type that comes with SQL Server, and if you don’t already have Visual Studio installed, it will install the shell for you.  If you already have Visual Studio 2010 installed, then it will just add this as an available project type.  On the other hand, if you regularly live in SQL Server Management Studio (SSMS) then you will really enjoy the SQL Source Control integration from within SSMS.  Both tool sets store their database model in script files.  In SSDT, these are on your file system like other source files; in SQL Source Control, these are stored in the folder structure in your source control system, and you can always GET them to your file system if you want to browse them directly. For me, the key differentiating factors are 1) a single, unified check-in, and 2) migration scripts.  How you value those two features will likely make your decision for you. Unified Check-In If you do a continuous-integration (CI) style of development that triggers an automated build with unit testing on every check-in of source code, and you use Visual Studio for the rest of your development, then you will want to really consider SSDT.  Because it is just another project in Visual Studio, it can be added to your existing Solution, and you can then do a complete, or unified single check-in of all changes whether they are application or database changes.  
This is simply not possible with SQL Source Control because it is in a different development tool (SSMS instead of Visual Studio) and there is no way to do one unified check-in between the two.  You CAN do really fast back-to-back check-ins, but there is the possibility that the automated build that is triggered from the first check-in will cause your unit tests to fail and the CI tool to report that you broke the build.  Of course, the automated build that is triggered from the second check-in which contains the “other half” of your changes should pass and so the amount of time that the build was broken may be very, very short, but if that is very, very important to you, then SQL Source Control just won’t work; you’ll have to use SSDT. Refactoring and Migrations If you work on a mature system, or on a not-so-mature but also not-so-well-designed system, where you want to refactor the database schema as you go along, but you can’t have data suddenly disappearing from your target system, then you’ll probably want to go with SQL Source Control.  As I wrote previously, there are a number of changes which you can make to your database that the comparison tools (both from Microsoft and Red Gate) simply cannot handle without the possibility (or probability) of data loss.  Currently, SSDT only offers you the ability to inject PRE and POST custom deployment scripts.  There is no way to insert your own script in the middle to override the default behavior of the tool.  In version 3.0 of SQL Source Control (Early Access version now available) you have that ability to create your own custom migration script to take the place of the commands that the tool would have done, and ensure the preservation of your data.  Or, even if the default tool behavior would have worked, but you simply know a better way then you can take control and do things your way instead of theirs. You Decide In the environment I work in, our automated builds are not triggered off of check-ins, but off of the clock (currently once per night) and so there is no point at which the automated build and unit tests will be triggered without having both sides of the development effort already checked-in.  Therefore having a unified check-in, while handy, is not critical for us.  As for migration scripts, these are critically important to us.  We do a lot of new development on systems that have already been in production for years, and it is not uncommon for us to need to do a refactoring of the database.  Because of the maturity of the existing system, that often involves data migrations or other additional SQL tasks that the comparison tools just can’t detect on their own.  Therefore, the ability to create a custom migration script to override the tool’s default behavior is very important to us.  And so, you can see why we will continue to use Red Gate SQL Source Control for the foreseeable future.
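
    As an illustration of the kind of custom migration script being described, a column rename is the classic case: a pure schema comparison sees a dropped column and a new column and scripts the data away, whereas a hand-written migration preserves it. A sketch with placeholder object names:

        -- Keep the data: rename the column instead of letting the comparison
        -- tool generate a DROP COLUMN / ADD COLUMN pair.
        EXEC sp_rename 'dbo.Customer.Surname', 'LastName', 'COLUMN';

        -- Or, for a more involved refactoring, move the data explicitly
        -- before dropping the old column.
        ALTER TABLE dbo.Customer ADD LastName nvarchar(100) NULL;
        UPDATE dbo.Customer SET LastName = Surname;
        ALTER TABLE dbo.Customer DROP COLUMN Surname;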

    Read the article

  • ubuntu ssh does not connect

    - by bocca
    SSH won't be able to establish a connection to our server Here's the output of ssh -vvv: ssh -v -v -v 11.11.11.11 OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 11.11.11.11 [11.11.11.11] port 22. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/identity type -1 debug1: identity file /root/.ssh/id_rsa type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.1p1 Debian-5ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 133/256 debug2: bits set: 486/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /root/.ssh/known_hosts debug3: check_host_in_hostfile: match line 1 debug1: Host 
'11.11.11.11' is known and matches the RSA host key. debug1: Found key in /root/.ssh/known_hosts:1 debug2: bits set: 497/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /root/.ssh/identity ((nil)) debug2: key: /root/.ssh/id_rsa ((nil)) debug2: key: /root/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,gssapi,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /root/.ssh/identity debug3: no such identity: /root/.ssh/identity debug1: Trying private key: /root/.ssh/id_rsa debug3: no such identity: /root/.ssh/id_rsa debug1: Trying private key: /root/.ssh/id_dsa debug3: no such identity: /root/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password root@11.11.11.11's password: debug3: packet_send2: adding 64 (len 57 padlen 7 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug3: tty_make_modes: ospeed 38400 debug3: tty_make_modes: ispeed 38400 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env SSH_AGENT_PID debug3: Ignored env SHELL debug3: Ignored env TERM debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GTK_RC_FILES debug3: Ignored env WINDOWID debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env GNOME_KEYRING_SOCKET debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env MAIL debug3: Ignored env PATH debug3: Ignored env DESKTOP_SESSION debug3: Ignored env PWD debug3: Ignored env GDM_KEYBOARD_LAYOUT debug3: Ignored env GNOME_KEYRING_PID debug1: Sending env LANG = en_CA.UTF-8 debug2: channel 0: request env confirm 0 debug3: Ignored env GDM_LANG debug3: Ignored env GDMSESSION debug3: Ignored env HISTCONTROL debug3: Ignored env SPEECHD_PORT debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env XDG_DATA_DIRS debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env DISPLAY debug3: Ignored env LESSCLOSE debug3: Ignored env XAUTHORITY debug3: Ignored env COLORTERM debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_confirm: type 99 id 0 debug2: shell request accepted on channel 0

    Read the article

  • Announcing the new Oracle Retail Workspace, A Configuration of Oracle WebCenter Spaces 11.1.1.5 for Oracle Retail

    - by Oracle Retail Documentation Team
    For the Oracle Retail 13.2.x enterprise, Oracle Retail Workspace 13.2.4 replaces previous versions of Oracle Retail Workspace. Oracle Retail Workspace 13.2.4 is a supported configuration of Oracle WebCenter Spaces 11.1.1.5 for Oracle Retail. Supported Product Overview In order to provide a next-generation Oracle user engagement platform for the retail industry, Oracle Retail Workspace leverages WebCenter Spaces. Oracle Retail Workspace is not a licensed retail application with any code. Instead, retailers purchase the underlying technology and then leverage the Oracle Retail Workspace Implementation Guide to configure a portal utilizing Oracle WebCenter Spaces. Oracle Retail Workspace has been repositioned as a configuration of Oracle WebCenter Spaces for the following reasons: The Oracle Retail Workspace configuration utilizes the external application functionality and the application navigator taskflow of the Oracle WebCenter Framework to configure Oracle Retail applications in Oracle WebCenter Spaces. The Oracle WebCenter Framework improves IT development cycle times by blending Web 2.0 services, processes, business intelligence, and transactions in an integrated JSF framework. The Oracle WebCenter Spaces 11g offers features provided by the previous versions of Oracle Retail Workspace that enable retailers to leverage a productive portal-based environment. List of Documents The following are included in Workspace 13.2.4, A Configuration of WebCenter Spaces 11.1.1.5 for Oracle Retail Oracle Retail Workspace Release Notes Oracle Retail Workspace Implementation Guide Workspace Retail Library—Unsupported The Oracle Retail Workspace Retail Library is comprised of previously-published accelerator documents and sample code downloads hosted on My Oracle Support. They are not supported, nor are they associated with the support lifecycle of the Workspace application. Doc ID: 1461281.1: Oracle Retail Workspace Retail Library Oracle Retail Workspace Retail Library Reference GuideA set of Micro-Applications that can be used to perform some of the operations of Oracle Retail Merchandising System (RMS) from outside the application. This document describes the functional and technical design details of the Micro-Applications available in this release, including the following and more: Create Regular Item Create Purchase Order Item Transfer Update Vendor Oracle Retail Fashion Planning Bundle Reports documentationThe Oracle Retail Fashion Planning Bundle Reports package includes role-based Oracle Business Intelligence (BI) Enterprise Edition (EE) reports and dashboards that provide an illustrative overview highlighting the Fashion Planning Bundle solutions. These dashboards can be leveraged out-of-the-box or can be used along with the other dashboards and reports that may have already been created to support a specific solution or organizational needs. This package includes dashboards for the Assortment Planning, Item Planning, Item Planning Configured for COE, Merchandise Financial Planning Retail Accounting, and Merchandise Financial Planning Cost Accounting applications. Oracle Retail Accelerators for WebLogic Server 11g Micro-Applications Development TutorialThis tutorial describes how you can create a Micro-Application for the Create a Regular Item task in the Retail Merchandising System (RMS) application using Oracle JDeveloper and ADF. 
Retail Accelerators: Developing ADF Reports for RPASThis document illustrates how you can use the Oracle Application Development Framework 11g (ADF) to generate reports that provide insights from the Oracle Retail Predictive Application Server (RPAS) based applications. Oracle Retail Accelerators Guide for WebCenter 11gOracle Retail Accelerators Guide for WebCenter 11g describes how you can integrate Oracle Retail applications with Oracle WebCenter Spaces and customize WebCenter Spaces to include custom-developed content. Oracle Retail Accelerators, Developing Oracle BI EE reports on RPAS Domain DataThis document illustrates how you can set up the integration between BI EE and RPAS domains to generate BI EE reports and dashboards for RPAS. Oracle Retail Accelerators, Developing Oracle BI EE Reports on RPAS WorkbooksThis document outlines a process to create real-time Oracle Business Intelligence (BI) Enterprise Edition reports against RPAS workbooks dynamically, as opposed to directly going against the RPAS domain for the data. 

    Read the article

  • SQL Server service accounts and SPNs

    - by simonsabin
    Service Principal Names (SPNs) are a must for Kerberos authentication, which is a must when using SharePoint, Reporting Services and SQL Server where you access one server that then needs to access another resource; this is called the double hop. The reason this is a complex problem is that the second hop has to be done with impersonation/delegation. For this to work there needs to be a way for the security system to make sure that the service in the middle is allowed to impersonate you; after all, you are not giving the service your password. To do this you need to be using Kerberos. The following is my simple interpretation of how Kerberos works. I find the Kerberos documentation ridiculously complex, so the following might be slightly wrong, but I think it's close enough. Kerberos works on a ticketing system; the principle is that you get a security token from AD and then you can pass that to the service in the middle, which can then use that token to impersonate you. For that to work AD has to be able to identify who is allowed to use the token, in this case the service account. But how do you as a client know what service account the service in the middle is configured with? The answer is SPNs. The SPN is the mapping between your logical connection and the service account. One type of SPN is for the DNS name of the server and the port, i.e. MySQL.mydomain.com and 1433. You can see how this maps to SQL Server on that server, but how does it map to the account? Well, it can be done in two ways: either you can have a mapping defined in AD, or AD can use a default mapping (this is something I didn't know about). To map the SPN in AD you have to add the SPN to the user account; this is documented in the first link below, either directly or using a tool called SetSPN. You might say that is complex; well, it is, and that's why SQL Server tries to do it for you. At start-up it tries to connect to AD and set the SPN on the account it is running as; clearly that can only happen IF SQL is running as a domain account AND, importantly, it has permission to do so. By default a normal domain user account doesn't have the correct permission, and that is why so many people have this problem. If the account is a domain admin then it will have permission, but none of us run SQL using domain admin accounts, do we? You might also note that the SPN contains the port number (this isn't a requirement now in SQL 2008, but I won't go into that), so if you set it manually and you are using dynamic ports (the default for a named instance) what do you do? Well, every time the port changes you need to change the SPN allocated to the account. That's why it's advised to let SQL Server register the SPN itself. You may also have thought: well, what happens if I change my service account, won't that lead to two accounts with the same SPN? Possibly. Having two accounts with the same SPN is definitely a problem. Why? Because if there are two accounts, Kerberos can't identify the exact account that the service is running as, it could be either account, and so your security falls back to NTLM. SETSPN is useful for finding duplicate SPNs. Reading this you will probably be thinking: oh my goodness, this is really difficult. It is; however, I've found today in investigating something else that there is an easy option. Use Network Service as your service account. Network Service is a special account and is tied to the computer. It appears that Network Service has the rights in AD to set an SPN mapping for the computer account.
    This then allows the SPN mapping to work. I believe this also works for the Local System account. To get all the SPNs in your AD, run the following (it could produce a large file, so you might want to restrict it to a specific OU or CN):

        ldifde -d "DC=<domain>" -l servicePrincipalName -F spn.txt

    You will read in the links below that you need SQL to register the SPN; how this is done is covered in:

        How to use Kerberos authentication in SQL Server - http://support.microsoft.com/kb/319723
        Using Kerberos with SQL Server - http://blogs.msdn.com/sql_protocols/archive/2005/10/12/479871.aspx
        Understanding Kerberos and NTLM authentication in SQL Server Connections - http://blogs.msdn.com/sql_protocols/archive/2006/12/02/understanding-kerberos-and-ntlm-authentication-in-sql-server-connections.aspx

    Summary: The only reason I personally know to use a domain account is when you can't get Kerberos to work and you want to do BULK INSERT or use another network service that requires access to a remote server. In this case you have to resort to using SQL authentication, and SQL Server uses its service account to access the remote service, and thus you need a domain account. You might need this if using some forms of replication. I've always found Kerberos awkward to set up and so have fallen back to this domain account approach. So, in summary, to get Kerberos to work try using the Network Service or Local System accounts. For a great post from Adam Saxton of the SQL Server support team go to http://blogs.msdn.com/psssql/archive/2010/03/09/what-spn-do-i-use-and-how-does-it-get-there.aspx
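
    A quick way to confirm whether a given connection actually ended up on Kerberos or fell back to NTLM is the auth_scheme column of sys.dm_exec_connections (SQL Server 2005 and later); a small check for the current session:

        SELECT auth_scheme              -- 'KERBEROS', 'NTLM' or 'SQL'
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;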

    Read the article

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I made some measures with Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange that this query becomes expensive only for some specific input data. Here’s the example. We have two tables that we are querying, one represents the objects we want to fetch (T_PROFILEGROUP), another represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying linked table. These are the queries that I executed with DB profiler running (they are the same except for the ID): Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; < 1169655247309537280 > < 1169655249792565248 > < 1464837997699399681 > 3 rows found. Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; < 1169655247309537280 > 1 row found. This is what I have in the profiler: 12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272 12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695. 12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928 12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697. 
12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; It’s clear that the first query took almost 100ms, while the second was executed instantly. It’s not about queries precompilation (the first one is precompiled too, as same queries happened earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why querying the same set of tables yields such a different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give the feeling of size: Command> select count(*) from T_PROFILEGROUP; < 183840 > 1 row found. Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS; < 2279104 > 1 row found.
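
    Without seeing the plans it is hard to say which index the optimizer picks for each ID, but one hedged, common step when a query filters on one column of a link table and joins on another is a composite index covering both, so the filter on M_ID_OID and the lookup of M_ID_EID can be served from a single index. A sketch using the column names from the question; the index name is made up, and the effect should be verified against the query plan before and after:

        CREATE INDEX ix_cg_oid_eid
            ON T_PROFILECONTEXT_PROFILEGROUPS (M_ID_OID, M_ID_EID);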

    Read the article
