Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

  • Oracle TNS problems?

    - by persistence
    I have an error. My PL/SQL Developer says my Oracle database cannot find the service descriptor, but when I check the listener I get this:

        LSNRCTL> start
        Starting tnslsnr: please wait...
        Service OracleOraDb10g_home1TNSListener already running.
        TNS-12560: TNS:protocol adapter error
        TNS-00530: Protocol adapter error

        LSNRCTL> status
        Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP
        TNS-12541: TNS:no listener
        TNS-12560: TNS:protocol adapter error
        TNS-00511: No listener
        32-bit Windows Error: 61: Unknown error

    Please help, I have a deadline this evening.

  • How to improve INSERT INTO ... SELECT locking behavior

    - by Artem
    In our production database, we run the following pseudo-code SQL batch query every hour:

        INSERT INTO TemporaryTable
            (SELECT FROM HighlyContentiousTableInInnoDb
             WHERE allKindsOfComplexConditions are true)

    The query itself does not need to be fast, but I noticed it was locking up HighlyContentiousTableInInnoDb even though it was only reading from it, which was making some other very simple queries take ~25 seconds (the time the batch query takes to run). Then I discovered that in this situation InnoDB tables are actually locked by a SELECT! http://www.mysqlperformanceblog.com/2006/07/12/insert-into-select-performance-with-innodb-tables/ But I don't really like the solution in that article of selecting into an OUTFILE; it seems like a hack (temporary files on the filesystem seem sucky). Any other ideas? Is there a way to make a full copy of an InnoDB table without locking it this way during the copy? Then I could just copy HighlyContentiousTable to another table and run the query there.
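
    One commonly suggested workaround (a hedged sketch, not from the linked article; on older MySQL versions the server option innodb_locks_unsafe_for_binlog plays a similar role) is to switch to row-based binary logging so the SELECT side of INSERT ... SELECT no longer needs shared locks on the source rows:

        -- assumes MySQL 5.1+ and permission to change these settings
        SET SESSION binlog_format = 'ROW';
        SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

        INSERT INTO TemporaryTable
        SELECT * FROM HighlyContentiousTableInInnoDb
        WHERE ...;  -- allKindsOfComplexConditions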

  • Service Broker message_body error when casting binary data to xml in C#

    - by TimBuckTwo
    I am using Service Broker with SQL Server 2008 and designing an External Activator service to consume messages from my target queue. My problem: I can't cast the message body returned from the SqlDataReader for the

        WAITFOR (RECEIVE TOP(1) conversation_handle, message_type_name, message_body FROM [{1}]), TIMEOUT {2}

    operation to XML in C#:

        SqlBinary MessageBody = reader.GetSqlBinary(2);
        MemoryStream memstream = new MemoryStream();
        XmlDocument xmlDoc = new XmlDocument();
        memstream.Write(MessageBody.Value, 0, MessageBody.Length);
        memstream.Position = 0;
        // the line below fails with: {"Data at the root level is invalid. Line 1, position 1."}
        xmlDoc.LoadXml(Encoding.ASCII.GetString(memstream.ToArray()));
        memstream.Close();

    To prevent poison messages I do not use CAST(message_body AS XML). Any suggestions would be greatly appreciated.
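
    A hedged guess at the cause: a Service Broker XML message body is not ASCII text (it typically carries a Unicode encoding), so decoding the bytes with Encoding.ASCII corrupts the document before the parser ever sees it. Letting the XML parser detect the encoding from the stream itself avoids the problem:

        // sketch: load straight from the stream instead of decoding by hand
        SqlBinary messageBody = reader.GetSqlBinary(2);
        using (var memstream = new MemoryStream(messageBody.Value))
        {
            var xmlDoc = new XmlDocument();
            xmlDoc.Load(memstream);  // XmlDocument.Load sniffs the BOM/encoding
        }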

  • MySQL select query where multiple conditions in the same column must exist

    - by David
    I'm putting together a dating site and I'm having a MySQL query issue. This works:

        SELECT * FROM `user`, `desired_partner`, `user_personality`
        WHERE dob BETWEEN '1957-05-18' AND '1988-05-18'
          AND country_id = '190'
          AND user.gender_id = '1'
          AND user.user_id = desired_partner.user_id
          AND desired_partner.gender_id = '2'
          AND user.user_id = user_personality.user_id
          AND user_personality.personality_id = '2'

    The SQL finds any male (gender_id = 1) in the USA (country_id = 190), within a certain age range, who has at least personality trait 2 (and possibly other personality traits) and is looking for a female (gender_id = 2).

    Question 1) How do I make it return only those with personality type 2 and no other personality traits? That is: find any man in the USA between 22 and 53 who is of personality type 2 (only) and is looking for a woman.

    Question 2) Suppose I want to find someone who matches personality types 1, 2, and 5 ONLY. There are 14 personality traits in the database and a user can be associated with any of them. Find any man in the USA between 22 and 53 who is of personality types 1, 2, and 5 (ONLY) and is looking for a woman.
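
    For the "these traits and no others" requirement, the usual pattern is relational division with GROUP BY/HAVING on the trait table. A sketch assuming the user_personality table from the question (in MySQL a boolean expression sums as 0 or 1):

        SELECT up.user_id
        FROM user_personality up
        GROUP BY up.user_id
        HAVING SUM(up.personality_id NOT IN (1, 2, 5)) = 0   -- no other traits
           AND COUNT(DISTINCT up.personality_id) = 3;        -- all three present

    Joining that result back to `user` and `desired_partner` re-applies the age, country, and gender conditions; Question 1 is the same shape with NOT IN (2) and a count of 1.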

  • Converting Encrypted Values

    - by Johnm
    Your database has been protecting sensitive data at rest using the cell-level encryption features of SQL Server for quite some time. The employees in the auditing department have been inviting you to their after-work gatherings and buying you drinks. Thousands of customers implicitly include you in their prayers of thanksgiving as their identities remain safe in your company's database.

    The cipher text resting snugly in a column of the varbinary data type is great for security, but it can create some interesting challenges when interacting with other data types, such as the XML data type. The XML data type is one that is often used as a message type for the Service Broker feature of SQL Server. It can also be an interesting data type to capture for auditing or for integrating with external systems. The challenge that cipher text presents is that the need for decryption remains even after it has experienced its XML metamorphosis. Quite an interesting challenge nonetheless, but fear not. There is a solution.

    To simulate this scenario, we first want to create a plain text value to encrypt. We will do this by creating a variable to store our plain text value:

        -- set plain text value
        DECLARE @PlainText NVARCHAR(255);
        SET @PlainText = 'This is plain text to encrypt';

    The next step is to create a variable that will store the cipher text generated by the encryption process. We populate this variable by using a pre-defined symmetric key and certificate combination:

        -- encrypt plain text value
        DECLARE @CipherText VARBINARY(MAX);
        OPEN SYMMETRIC KEY SymKey
            DECRYPTION BY CERTIFICATE SymCert
            WITH PASSWORD = 'mypassword2010';
        SET @CipherText = EncryptByKey
                              (
                                Key_GUID('SymKey'),
                                @PlainText
                              );
        CLOSE ALL SYMMETRIC KEYS;

    The value of our newly generated cipher text is 0x006E12933CBFB0469F79ABCC79A583--. This will be important as we reference our cipher text later in this post.

    Our final step in preparing the scenario is to create a table variable to simulate a table that contains a column used to hold encrypted values, and to populate it with the newly generated cipher text:

        -- capture value in table variable
        DECLARE @tbl TABLE (EncVal VARBINARY(MAX));
        INSERT INTO @tbl (EncVal) VALUES (@CipherText);

    We are now ready to experience the challenge of capturing our encrypted column in an XML data type using the FOR XML clause:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    If you add a SELECT @xml statement at the end of this portion of the code, you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="AG4Skzy/sEafeavMeaWDBwEAAACE--" />
        </root>

    Strangely, the captured value looks nothing like the value that was created through the encryption process. The result is that when this XML is converted into a readable data set, the encrypted value cannot be decrypted, even with access to the symmetric key and certificate used to perform the encryption.

    An immediate thought might be to convert the varbinary data type to either varchar or nvarchar before creating the XML data. This approach makes good sense. The code for this might look something like the following:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      CONVERT(NVARCHAR(MAX), EncVal) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    However, this results in the following error:

        Msg 9420, Level 16, State 1, Line 26
        XML parsing: line 1, character 37, illegal xml character

    A quick query that returns CONVERT(NVARCHAR(MAX), EncVal) reveals that the value causing the error looks like something off a genuine Chinese menu. While this situation does present us with one of those spine-tingling, expletive-generating challenges, rest assured that this approach is on the right track. With the addition of the "style" argument to the CONVERT method, our solution is at hand. When converting varbinary data types we have three styles available to us:

    - The first is to not include the style parameter, or to use the value "0". As we have seen, this style will not work for us.
    - The second option is the value "1", which keeps our varbinary value including the "0x" prefix. In our case, the value will be 0x006E12933CBFB0469F79ABCC79A583--
    - The third option is the value "2", which chops the "0x" prefix off our varbinary value. In our case, the value will be 006E12933CBFB0469F79ABCC79A583--

    Since we will want to convert this back to varbinary when reading the value from the XML data, we want the "0x" prefix, so we change our code as follows:

        -- capture set in xml
        DECLARE @xml XML;
        SET @xml = (SELECT
                      CONVERT(NVARCHAR(MAX), EncVal, 1) AS EncVal
                    FROM @tbl AS MYTABLE
                    FOR XML AUTO, BINARY BASE64, ROOT('root'));

    Once again, with a SELECT @xml statement at the end of this portion of the code, you will see the contents of the XML data in its raw format:

        <root>
          <MYTABLE EncVal="0x006E12933CBFB0469F79ABCC79A583--" />
        </root>

    Nice! We are now cooking with gas. To continue our scenario, we will parse the XML data into a data set so that we can glean our freshly captured cipher text, and capture it into a variable so that it can be used during decryption:

        -- read back xml
        DECLARE @hdoc INT;
        DECLARE @EncVal NVARCHAR(MAX);
        EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml;
        SELECT @EncVal = EncVal
        FROM OPENXML (@hdoc, '/root/MYTABLE')
        WITH ([EncVal] VARBINARY(MAX) '@EncVal');
        EXEC sp_xml_removedocument @hdoc;

    Finally, the decryption of our cipher text, using the DECRYPTBYKEYAUTOCERT method and the certificate utilized to perform the encryption earlier in our exercise:

        SELECT
            CONVERT(NVARCHAR(MAX),
                    DecryptByKeyAutoCert
                        (
                          CERT_ID('SymCert'),
                          N'mypassword2010',
                          @EncVal
                        )
                   ) EncVal;

    Ah yes, another hurdle presents itself! The decryption produced the value NULL, which in cryptography means that either you don't have permission to decrypt the cipher text or something went wrong during the decryption process (OK, sometimes the value actually is NULL, but not in this case). As we can see, the @EncVal variable is an nvarchar data type, while the third parameter of the DECRYPTBYKEYAUTOCERT method requires a varbinary value. Therefore we will need to utilize our handy-dandy CONVERT method:

        SELECT
            CONVERT(NVARCHAR(MAX),
                    DecryptByKeyAutoCert
                        (
                          CERT_ID('SymCert'),
                          N'mypassword2010',
                          CONVERT(VARBINARY(MAX), @EncVal)
                        )
                   ) EncVal;

    Oh, almost. The result remains NULL despite our conversion to the varbinary data type. This is because we created a varbinary value that does not reflect the actual value of our @EncVal variable, but rather a varbinary conversion of the variable itself; in this case, something like 0x3000780030003000360045003--. Considering that the "style" parameter got us past the XML challenge, we will want to consider its power for this challenge as well. Knowing that the value "1" provides us with the actual value including the "0x" prefix, we opt to utilize that value here:

        SELECT
            CONVERT(NVARCHAR(MAX),
                    DecryptByKeyAutoCert
                        (
                          CERT_ID('SymCert'),
                          N'mypassword2010',
                          CONVERT(VARBINARY(MAX), @EncVal, 1)
                        )
                   ) EncVal;

    Bingo, we have success! We have discovered what happens to varbinary data when it is captured as XML data, and we have figured out how to make this data useful post-XML-ification. Best of all, we now have a choice of after-work parties, now that our very happy client who depends on our XML-based interface has invited us to dinner in celebration. All thanks to the effective use of the style parameter.

  • Save entity with Entity Framework

    - by Michel
    Hi, I'm saving entities/records with the EF, but I'm curious whether there is another way of doing it. I receive a class from an MVC controller method, so basically I have all the info: the class's properties, including the primary key. Without EF I would do a SQL update (UPDATE table SET a=b, c=d WHERE id = 5), but with EF I got no further than this:

      1. Get the object with ID 5.
      2. Update the (existing) object with the new object.
      3. SubmitChanges.

    What bothers me is that I have to get the object from the database first, when I already have all the info needed for an UPDATE statement. Is there another way of doing this?
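
    A hedged sketch of the usual answer: attach the detached object and mark it modified, so EF issues a single UPDATE with no prior SELECT. The exact API depends on the EF version; with the DbContext API (EF 4.1+) it looks roughly like this, where MyContext and Person are placeholders for your own types:

        // sketch, assuming EF 4.1+ DbContext
        public void Update(Person person)
        {
            using (var ctx = new MyContext())
            {
                ctx.Entry(person).State = EntityState.Modified;  // attach as modified
                ctx.SaveChanges();                               // single UPDATE ... WHERE id = 5
            }
        }

    Note this marks every property as modified, so any property left unset on the incoming object will overwrite the stored row.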

  • ODBC and Windows Service

    - by DNS
    Hi, I'm new to Windows services and... you guessed it, I'm a bit stuck. Let me paint the picture: I'm running a timed service that uses an OdbcDataReader and SqlBulkCopy to (1) archive the data and (2) normalize the data on a SQL box. When I run this code in a Windows Forms project, it works fine. It also works when I change the DSN's Data Directory path to a local drive instead of the network share (just to simulate the environment locally). I'm obviously missing something. Any help will be appreciated. DNS

  • Insert rows into MySQL table while changing one value

    - by Jonathan
    Hey all- I have a MySQL table that defines values for a specific customer type:

        | CustomerType | Key Field 1 | Key Field 2 | Value |

    Each customer type has 10 values associated with it (based on the other two key fields). I am creating a new customer type (TypeB) that is exactly the same as an existing customer type (TypeA). I want to insert "TypeB" as the CustomerType but just copy the values from TypeA's rows for the other three fields. Is there a SQL INSERT statement to make this happen? Something like:

        insert into customers(customer_type, key1, key2, value)
        values "TypeB"
        union
        select key1, key2, value from customers where customer_type = "TypeA"

    Thanks- Jonathan
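
    A sketch of the usual INSERT ... SELECT form (column names assumed from the question): the constant goes into the select list rather than a VALUES clause, so each of TypeA's rows is copied with the type swapped:

        INSERT INTO customers (customer_type, key1, key2, value)
        SELECT 'TypeB', key1, key2, value
        FROM customers
        WHERE customer_type = 'TypeA';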

  • Adhoc Data processing / ETL

    - by Dane
    I've just started at a new company in outsourced communications (e.g. print and mail, email, fax). One of the requirements is to process client data and get it ready for mailing. For recurring jobs this is easy using an ETL tool linked in with some addressing software, but for ad hoc work that's a bit of overkill. I've used in-house developed tools before (clunky but usable), but I don't want to have to re-develop that here. Any recommendations? Some desired features:

    - Basic DBMS functionality (preferably with a proper DBMS backend for SQL support)
    - Field concatenation (e.g. combine Firstname + Surname)
    - "Pushing columns" (e.g. with address fields 1-8, push them left so if one is blank, the next one gets pushed up)
    - Australia Post mail sorting and DPID allocation (or the ability to link into external tools relatively easily)

  • JPA optimistic lock - setting @Version on entity class causes query to include VERSION as a column

    - by masato-san
    I'm using JPA (TopLink Essentials), NetBeans 6.8, and GlassFish v3. In my entity class I added the @Version annotation to enable optimistic locking at transaction commit. However, after I added the annotation my queries started including VERSION as a column, which throws a SQL exception. None of this is mentioned in any tutorial I've seen so far. What could be wrong? Snippet:

        public class MasatosanTest2 implements Serializable {
            private static final long serialVersionUID = 1L;

            @Id
            @Basic(optional = false)
            @Column(name = "id")
            private Integer id;

            @Column(name = "username")
            private String username;

            @Column(name = "note")
            private String note;

            // here adding Version
            @Version
            int version;

    Query used:

        SELECT m FROM MasatosanTest2 m

    Internal Exception: com.microsoft.sqlserver.jdbc.SQLServerException
    Call: SELECT id, username, note, VERSION FROM MasatosanTest2
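
    A hedged reading of the error: @Version makes the provider map a real column, so the table itself must have one. If the version column is simply missing from the SQL Server table, adding it (names assumed from the snippet) is usually the fix:

        -- assumption: the table was created before @Version was added
        ALTER TABLE MasatosanTest2 ADD version INT NOT NULL DEFAULT 0;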

  • SQLite as an App Queue, Exclusive Row Lock?

    - by ScSub
    I am considering using SQLite as a "job queue container" and was wondering how I could do so, using custom C# (with ADO.NET) to work the database. If this were SQL Server, I would set up a serializable transaction to make sure the parent row and child rows were exclusively mine until I was done. I'm not sure how that would work in SQLite; can anyone offer any assistance? If there are any other existing implementations of message queueing with SQLite, I'd appreciate any pointers in that direction as well. Thanks!
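
    One point worth knowing: SQLite has no row-level locks, only whole-database locks, so "exclusively mine" means holding the write lock for the duration of the claim. A sketch with System.Data.SQLite (table and column names are made up):

        using (var conn = new SQLiteConnection("Data Source=queue.db"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            using (var cmd = conn.CreateCommand())
            {
                // claim the next pending job; the transaction's write lock
                // keeps other workers out until Commit
                cmd.Transaction = tx;
                cmd.CommandText =
                    "UPDATE jobs SET state = 'taken' " +
                    "WHERE id = (SELECT id FROM jobs WHERE state = 'pending' LIMIT 1)";
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
        }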

  • NLB and Host Header Value

    - by Hafeez
    Background: we are using MOSS 2007 in a farm configuration: 2 WFEs, 1 indexer, and SQL Server. MS NLB is used for load balancing. A host header value, mapped to the cluster's virtual IP in DNS, is used while creating the web applications in MOSS, and all of them share port 80.

    Problem: when a client tries to access the web applications that are configured with host header values, both WFEs hang for 5 minutes; they stop responding to ping and the browser shows 'Page not found'. In the application log on the WFE, this error is registered: "provider: TCP Provider, error: 0 - The semaphore timeout period has expired". Interestingly, the web application with no host header value, hosted on a different port, works correctly. Any clue to solve this problem would be helpful. Thks. Hafeez

  • Problem using FluentNHibernate, SQLite and Enums

    - by weenet
    I have a Sharp Architecture based app using Fluent NHibernate with automapping. I have the following enum:

        public enum Topics
        {
            AdditionSubtraction = 1,
            MultiplicationDivision = 2,
            DecimalsFractions = 3
        }

    and the following class:

        public class Strategy : BaseEntity
        {
            public virtual string Name { get; set; }
            public virtual Topics Topic { get; set; }
            public virtual IList Items { get; set; }
        }

    If I create an instance of the class like this:

        Strategy s = new Strategy { Name = "Test", Topic = Topics.AdditionSubtraction };

    it saves correctly (thanks to this mapping convention):

        public class EnumConvention : IPropertyConvention, IPropertyConventionAcceptance
        {
            public void Apply(FluentNHibernate.Conventions.Instances.IPropertyInstance instance)
            {
                instance.CustomType(instance.Property.PropertyType);
            }

            public void Accept(FluentNHibernate.Conventions.AcceptanceCriteria.IAcceptanceCriteria<IPropertyInspector> criteria)
            {
                criteria.Expect(x => x.Property.PropertyType.IsEnum);
            }
        }

    However, upon retrieval, I get an error regarding an attempt to convert Int64 to Topics. This works fine in SQL Server. Any ideas for a workaround? Thanks.

  • Configure Hibernate validation for bean

    - by sergionni
    Hi. I need to perform validation based on a SQL query result. The query is defined as an annotation, as @NamedQuery in my entity bean. According to the Hibernate documentation (doc), there is the possibility to validate a bean on the following operations:

    - pre-update
    - pre-insert
    - pre-delete

    It looks like:

        <hibernate-configuration>
            <session-factory>
                ...
                <event type="pre-update">
                    <listener class="org.hibernate.cfg.beanvalidation.BeanValidationEventListener"/>
                </event>
                <event type="pre-insert">
                    <listener class="org.hibernate.cfg.beanvalidation.BeanValidationEventListener"/>
                </event>
                <event type="pre-delete">
                    <listener class="org.hibernate.cfg.beanvalidation.BeanValidationEventListener"/>
                </event>
            </session-factory>
        </hibernate-configuration>

    The question is how to connect my bean with the validation configuration described above.

  • How do I use SimpleRepository without the migration approach?

    - by Bill Sempf
    I am evaluating SubSonic for use in phase 2 of a large project. This is an ASP.NET project with 700 tables in a SQL Server database. We are planning for our domain model to consist of POCO classes, to assist with an offline-access requirement we have. I believe that the SimpleRepository pattern would be among my best options. Since I already have a database, however, the migration assistance doesn't help me. Are there T4 templates for SimpleRepository that I just overlooked? How do I 'turn off' migration? If I missed something in the wiki, point me there; otherwise get me started and I'll write up a wiki entry for y'all when we get there.
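
    For the "turn off migration" part, a hedged sketch: migrations in SubSonic's SimpleRepository only run when asked for, so constructing the repository without the RunMigrations option (assuming SubSonic 3.x; "Northwind" stands in for your connection string name) should leave the existing 700-table schema untouched:

        // sketch, assuming SubSonic 3.x
        var repo = new SimpleRepository("Northwind");  // no SimpleRepositoryOptions.RunMigrations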

  • regex to match postgresql bytea

    - by filiprem
    In PostgreSQL, there is a BLOB data type called bytea: it's just an array of bytes. bytea literals are output in the following way:

        '\\037\\213\\010\\010\\005`Us\\000\\0001.fp3\'\\223\\222%'

    See the PostgreSQL docs for the full definition of the format. I'm trying to construct a Perl regular expression which will match any such string. It should also match standard ANSI SQL string literals, like:

        'Joe', 'Joe''s Mom', 'Fish Called ''Wendy'''

    It should also match the backslash-escaped variant: 'Joe\'s Mom'. My first approach (shown below) works only for some bytea representations:

        s{
            '           # Opening apostrophe
            (?:         # Start group
              [^\\\']   # Anything but a backslash or an apostrophe
            |           # or
              \\ .      # Backslash and anything
            |           # or
              \'\'      # Double apostrophe
            )*          # End of group
            '           # Closing apostrophe
        }{LITERAL_REPLACED}xgo;

    For others (longer ones, with many escaped apostrophes), Perl gives this warning:

        Complex regular subexpression recursion limit (32766) exceeded at ./sqa.pl line 33, <> line 1.

    So I am looking for a better (but still regex-based) solution; it probably requires some regex alchemy (avoiding backreferences and all).
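
    A sketch of the standard fix: Friedl's "unrolled loop" pattern, which removes the ambiguous alternation that drives the recursion (same intended semantics as the original, assuming the same escape rules):

        s{
            '                    # opening apostrophe
            [^\\']*              # run of ordinary characters
            (?:
                (?: \\ . | '' )  # one escaped char or doubled apostrophe
                [^\\']*          # then more ordinary characters
            )*
            '                    # closing apostrophe
        }{LITERAL_REPLACED}xg;

    On Perl 5.10+, the possessive quantifier [^\\']*+ makes the same idea more explicit by forbidding backtracking into the character runs.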

  • How should I rewrite my code to make it amenable to unittesting?

    - by justin
    I've been trying to get started with unit testing while working on a little CLI program. My program basically parses the command-line arguments and options and decides which function to call. Each of the functions performs some operation on a database. So, for instance, I might have a create function (I've left out the error handling):

        def create(self, opts, args):
            strtime = datetime.datetime.now().strftime("%D %H:%M")
            vals = (strtime, opts.message, opts.keywords, False)
            self.execute("insert into mytable values (?, ?, ?, ?)", vals)
            self.commit()

    Should my test case call this function, then execute a SELECT to check that the row was entered? That sounds reasonable, but also makes the tests harder to maintain. Would you rewrite the function to return something and check for the return value? Thanks
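
    A sketch of the SELECT-based approach against a throwaway in-memory database (MyCli, its constructor, and the schema are placeholders for the real app's wiring):

        import sqlite3
        import unittest
        from types import SimpleNamespace

        class CreateTests(unittest.TestCase):
            def setUp(self):
                # hypothetical: the app accepts an injected connection
                self.conn = sqlite3.connect(":memory:")
                self.conn.execute(
                    "create table mytable (ts text, message text, keywords text, done bool)")
                self.app = MyCli(self.conn)  # placeholder for the real class

            def test_create_inserts_row(self):
                opts = SimpleNamespace(message="hello", keywords="greeting")
                self.app.create(opts, [])
                rows = self.conn.execute(
                    "select message, keywords from mytable").fetchall()
                self.assertEqual(rows, [("hello", "greeting")])

    Injecting the connection keeps the test hermetic; asserting on a returned value instead couples the test less to the schema, but proves less about the actual write.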

  • Naming convention for the primary key in a Table

    - by kwokwai
    Hi all, I am learning the model relationship types in CakePHP. I have built two tables; Table A has these fields:

        Table A { postID, topic, content }
        Table B { replyID, content, postID }

    When I ran the web page, a bunch of SQL-related errors popped up saying that CakePHP couldn't find post_id. It is weird: I have already declared $primaryKey to use postID in tableA.php under the Models folder, but CakePHP seemed to want me to change the ID field to post_id instead of postID, because the errors disappeared after I changed the primary key to post_id. Any ideas?
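
    A hedged explanation: the post_id CakePHP is hunting for is the conventional foreign key name in the related table, which $primaryKey alone does not override. In CakePHP 1.x the association can name the foreign key explicitly (model and field names assumed from the question):

        <?php
        // sketch: keep the postID column and tell the association about it
        class Reply extends AppModel {
            var $primaryKey = 'replyID';
            var $belongsTo = array(
                'Post' => array('foreignKey' => 'postID')
            );
        }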

  • What is Boost missing?

    - by Robert Gould
    After spending most of my waking time on Stack Overflow, for better or for worse, I've come to notice how 99% of the C++ questions are answered with "use boost::wealreadysolvedyourproblem", but there must be a few areas Boost doesn't cover that it would be better off covering. So what features is Boost missing? I'll start by saying:

    - boost::sql (although SOCI should try to become a legal part of Boost)
    - boost::json (although TinyJSON should try to become a legal part of Boost)
    - boost::audio (no idea about a good boost-like C++ library)

    PS: The purpose is to compile a reasonable list, and hopefully Boost-like solutions out there that aren't yet part of Boost, so no silly stuff like boost::turkey please.

  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
    I am writing a simple ASP.NET MVC app using the session-per-request and transaction-per-request patterns (via a custom HttpModule). It seems to work properly, but the performance is terrible (a simple page loads in ~7 seconds). A transaction is created for every HTTP request, including requests for graphical resources (all the images on the site), and that seems to inflate the loading times: without transactions, load times per image are ~1-10 ms; with transactions they are over 1 second. What is the proper way to manage transactions in the ASP.NET MVC + NH stack? When I put all transactions into my repository methods, for some obscure reason I got 'implicit transactions' warnings in NHProf (the SQL statements were executed outside a transaction, even though in code the session.Save()/Update()/etc. methods were invoked within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
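
    One hedged sketch of a fix: keep the HttpModule, but skip NHibernate setup for requests that clearly target static content, so only controller actions pay for a session and transaction (the extension list and wiring are assumptions):

        // inside the custom HttpModule
        private static readonly string[] StaticExtensions =
            { ".png", ".jpg", ".gif", ".css", ".js", ".ico" };

        private void OnBeginRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;
            var ext = System.IO.Path.GetExtension(app.Request.Url.AbsolutePath);
            if (Array.Exists(StaticExtensions,
                    x => x.Equals(ext, StringComparison.OrdinalIgnoreCase)))
                return;  // static file: no session, no transaction

            // ... open the ISession and begin the ITransaction as before
        }

    If IIS routes all requests through managed code, this check matters a lot; with the default static-file handler, most image requests never hit the module at all.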

  • C# to loop through folder until it finds the correct files

    - by Liton Uddin
    I’m running a batch to do an update to my sql table. I’m using windows scheduler to run the batch file. Each day files come in at different time. Sometime they come in after my scheduled time therefore the batch file doesn’t run when there’s no file before the scheduled task in the folder. I want to create a c# program where it will loop through the folder until it finds the files and then move to the next step in the batch file. Basically my goal is to create a program that will look for those particular files and once they finds the correct files then it will start the update. I’m new to programming and need some direction. Can someone please help me with this? Thanks.

  • Hibernate schema parameter doesn't work in @SequenceGenerator annotation

    - by tabdulin
    I have the following code:

        @Entity
        @Table(name = "my_table", schema = "my_schema")
        @SequenceGenerator(name = "my_table_id_seq", sequenceName = "my_table_id_seq", schema = "my_schema")
        public class MyClass {
            @Id
            @GeneratedValue(generator = "my_table_id_seq", strategy = GenerationType.SEQUENCE)
            private int id;
        }

    Database: PostgreSQL 8.4, Hibernate Annotations 3.5.0-Final. When saving an object of MyClass, it generates the following SQL query:

        select nextval('my_table_id_seq')

    So there is no schema prefix, and therefore the sequence cannot be found. When I write the sequenceName as sequenceName = "my_schema.my_table_id_seq", everything works. Do I misunderstand the meaning of the schema parameter, or is it a bug? Any ideas how to make the schema parameter work?

  • Logging: which is the best way?

    - by Tony
    Hi. People who talk about loggers here never talk about the Windows EventLog; I think it is good for Windows systems. Is it reliable, or might I find it dead some bad morning? Why not log everything to SQL Server? I am creating an e-commerce website, and if SQL Server is down the website will be down anyway, but I am worried about temporary connection failures. What do you think? Why does everyone like files? They can grow to a great size, too big to handle; maybe I should create a new file when the current one gets too big, and name each file with a date. Has anyone tried the MS Enterprise Library? Talk to me about it. Thanks

  • Inspect in memory hsqldb while debugging

    - by Albert
    We're using HSQLDB in memory to run JUnit tests which operate against a database. The DB is set up before each test via a Spring configuration. All works fine. Now, when a test fails, it would be convenient to inspect the values in the in-memory database. Is this possible? If so, how? Our URL is:

        jdbc.url=jdbc:hsqldb:mem:testdb;sql.enforce_strict_size=true

    The database is destroyed after each test, but while the debugger is running the database should still be alive. I've tried connecting with the HSQLDB DatabaseManager. That works, but I don't see any tables or data. Any help is highly appreciated!
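
    A hedged explanation for the empty DatabaseManager: a mem: URL is private to the JVM that created it, so an externally launched tool connects to a brand-new, empty database that merely shares the name. Launching the manager from inside the test JVM (for example, by evaluating this expression in the debugger at a breakpoint) should show the real data:

        // sketch: runs in the same JVM, so it sees the same mem:testdb
        org.hsqldb.util.DatabaseManagerSwing.main(new String[] {
            "--url", "jdbc:hsqldb:mem:testdb", "--noexit"
        });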

  • Add Core Data Index to certain Attributes via migration

    - by steipete
    For performance reasons, I want to set the Indexed attribute on some of my entities' attributes. I created a new Core Data model version to perform the changes. Core Data detects the changes and migrates my store to the new version; however, NO INDEXES ARE GENERATED. If I recreate the database from scratch, the indexes are there. I checked with SQLite Browser both on the iPhone and in the Simulator. The problem only occurs if a database in the prior format already exists. Is there a way to manually add the indexes? Write some SQL for that? Or am I missing something? I have already done some more critical migrations with no problems, but those missing indexes are bugging me. Thanks for helping!
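
    On the "write some SQL" idea, a heavily hedged sketch: Core Data's SQLite store maps entities and attributes to Z-prefixed tables and columns, so after migration an index could in principle be added directly. The names below are illustrative only, and touching the store file directly is unsupported and fragile, so treat this as a last resort:

        -- assumed naming for an entity "MyEntity" with attribute "myAttribute"
        CREATE INDEX IF NOT EXISTS ZMYENTITY_ZMYATTRIBUTE_INDEX
            ON ZMYENTITY (ZMYATTRIBUTE);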
