Search Results

Search found 5638 results on 226 pages for 'debian sys maint'.


  • Incompatible types when assigning to type 'struct compartido'

    - by user1660559
    I have one problem with this code. I should create one structure and share it across 5 new processes created from the father: #include <stdio.h> #include <stdlib.h> #include <sys/wait.h> #include <unistd.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <sys/sem.h> #include <time.h> struct compartido { int pid1, pid2, pid3, pid4, pid5; int propietario; int contador; int pidpadre; }; struct compartido var; int main(int argc, char *argv[]) { key_t llave1,llavesem; int idmem,idsem; llave1=ftok("/tmp",'a'); idmem=shmget(llave1,sizeof(int),IPC_CREAT|0600); if (idmem==-1) { perror ("shmget"); return 1; } var=shmat(idmem,0,0); /*This line is giving the error*/ /*rest of the code*/ } The exact error given is: error: incompatible types when assigning to type 'struct compartido' from type 'void *' I need to put this structure in the shared variable to be able to see and modify all that data from the 6 processes (5 children and the father). What am I doing wrong? Thanks in advance and best regards,
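
    A minimal sketch of the usual fix, keeping the asker's identifiers: shmat() returns a void * (the address where the segment is attached), so var has to be a pointer rather than a struct value, and the segment should be sized for the whole struct instead of an int. The cast is optional in C but makes the intent explicit.

      #include <stdio.h>
      #include <sys/types.h>
      #include <sys/ipc.h>
      #include <sys/shm.h>

      struct compartido {
          int pid1, pid2, pid3, pid4, pid5;
          int propietario;
          int contador;
          int pidpadre;
      };

      struct compartido *var;                      /* pointer, not a struct value */

      int main(void)
      {
          key_t llave1 = ftok("/tmp", 'a');
          int idmem = shmget(llave1, sizeof(struct compartido), IPC_CREAT | 0600);
          if (idmem == -1) { perror("shmget"); return 1; }

          var = (struct compartido *) shmat(idmem, NULL, 0);   /* cast the void * */
          if (var == (void *) -1) { perror("shmat"); return 1; }

          var->contador = 0;                       /* members are reached through -> */
          return 0;
      }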

    Read the article

  • Stored procedure to remove FK of a given table

    - by Nicole
    I need to create a stored procedure that: Accepts a table name as a parameter Find its dependencies (FKs) Removes them Truncate the table I created the following so far based on http://www.mssqltips.com/sqlservertip/1376/disable-enable-drop-and-recreate-sql-server-foreign-keys/ . My problem is that the following script successfully does 1 and 2 and generates queries to alter tables but does not actually execute them. In another word how can execute the resulting "Alter Table ..." queries to actually remove FKs? CREATE PROCEDURE DropDependencies(@TableName VARCHAR(50)) AS BEGIN SELECT 'ALTER TABLE ' + OBJECT_SCHEMA_NAME(parent_object_id) + '.[' + OBJECT_NAME(parent_object_id) + '] DROP CONSTRAINT ' + name FROM sys.foreign_keys WHERE referenced_object_id=object_id(@TableName) END EXEC DropDependencies 'TableName' Any idea is appreciated! Update: I added the cursor to the SP but I still get and error: "Msg 203, Level 16, State 2, Procedure DropRestoreDependencies, Line 75 The name 'ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK__ChileTable__ParentTable__745C7C5D]' is not a valid identifier." Here is the updated SP: CREATE PROCEDURE DropRestoreDependencies(@schemaName sysname, @tableName sysname) AS BEGIN SET NOCOUNT ON DECLARE @operation VARCHAR(10) SET @operation = 'DROP' --ENABLE, DISABLE, DROP DECLARE @cmd NVARCHAR(1000) DECLARE @FK_NAME sysname, @FK_OBJECTID INT, @FK_DISABLED INT, @FK_NOT_FOR_REPLICATION INT, @DELETE_RULE smallint, @UPDATE_RULE smallint, @FKTABLE_NAME sysname, @FKTABLE_OWNER sysname, @PKTABLE_NAME sysname, @PKTABLE_OWNER sysname, @FKCOLUMN_NAME sysname, @PKCOLUMN_NAME sysname, @CONSTRAINT_COLID INT DECLARE cursor_fkeys CURSOR FOR SELECT Fk.name, Fk.OBJECT_ID, Fk.is_disabled, Fk.is_not_for_replication, Fk.delete_referential_action, Fk.update_referential_action, OBJECT_NAME(Fk.parent_object_id) AS Fk_table_name, schema_name(Fk.schema_id) AS Fk_table_schema, TbR.name AS Pk_table_name, schema_name(TbR.schema_id) Pk_table_schema FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id --inner join WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName OPEN cursor_fkeys FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER WHILE @@FETCH_STATUS = 0 BEGIN -- create statement for dropping FK and also for recreating FK IF @operation = 'DROP' BEGIN -- drop statement SET @cmd = 'ALTER TABLE [' + @FKTABLE_OWNER + '].[' + @FKTABLE_NAME + '] DROP CONSTRAINT [' + @FK_NAME + ']' EXEC @cmd -- create process DECLARE @FKCOLUMNS VARCHAR(1000), @PKCOLUMNS VARCHAR(1000), @COUNTER INT -- create cursor to get FK columns DECLARE cursor_fkeyCols CURSOR FOR SELECT COL_NAME(Fk.parent_object_id, Fk_Cl.parent_column_id) AS Fk_col_name, COL_NAME(Fk.referenced_object_id, Fk_Cl.referenced_column_id) AS Pk_col_name FROM sys.foreign_keys Fk LEFT OUTER JOIN sys.tables TbR ON TbR.OBJECT_ID = Fk.referenced_object_id INNER JOIN sys.foreign_key_columns Fk_Cl ON Fk_Cl.constraint_object_id = Fk.OBJECT_ID WHERE TbR.name = @tableName AND schema_name(TbR.schema_id) = @schemaName AND Fk_Cl.constraint_object_id = @FK_OBJECTID -- added 6/12/2008 ORDER BY Fk_Cl.constraint_column_id OPEN cursor_fkeyCols FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME SET @COUNTER = 1 SET @FKCOLUMNS = '' SET @PKCOLUMNS = '' WHILE @@FETCH_STATUS = 0 BEGIN IF @COUNTER > 1 BEGIN SET @FKCOLUMNS = @FKCOLUMNS + ',' SET @PKCOLUMNS = 
@PKCOLUMNS + ',' END SET @FKCOLUMNS = @FKCOLUMNS + '[' + @FKCOLUMN_NAME + ']' SET @PKCOLUMNS = @PKCOLUMNS + '[' + @PKCOLUMN_NAME + ']' SET @COUNTER = @COUNTER + 1 FETCH NEXT FROM cursor_fkeyCols INTO @FKCOLUMN_NAME,@PKCOLUMN_NAME END CLOSE cursor_fkeyCols DEALLOCATE cursor_fkeyCols END FETCH NEXT FROM cursor_fkeys INTO @FK_NAME,@FK_OBJECTID, @FK_DISABLED, @FK_NOT_FOR_REPLICATION, @DELETE_RULE, @UPDATE_RULE, @FKTABLE_NAME, @FKTABLE_OWNER, @PKTABLE_NAME, @PKTABLE_OWNER END CLOSE cursor_fkeys DEALLOCATE cursor_fkeys END For running use: EXEC DropRestoreDependencies dbo, ParentTable
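
    The Msg 203 error comes from EXEC @cmd: without parentheses, SQL Server treats the contents of @cmd as the name of a stored procedure instead of a batch to run. A small sketch of the two usual ways to execute the generated string:

      -- EXEC @cmd              -- wrong: @cmd is looked up as a procedure name (Msg 203)
      EXEC (@cmd)               -- right: the parentheses execute the string as dynamic SQL
      -- or, preferably:
      EXEC sp_executesql @cmd   -- @cmd must be NVARCHAR for sp_executesql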

    Read the article

  • node.js callback getting unexpected value for variable

    - by defrex
    I have a for loop, and inside it a variable is assigned with var. Also inside the loop a method is called which requires a callback. Inside the callback function I'm using the variable from the loop. I would expect that it's value, inside the callback function, would be the same as it was outside the callback during that iteration of the loop. However, it always seems to be the value from the last iteration of the loop. Am I misunderstanding scope in JavaScript, or is there something else wrong? The program in question here is a node.js app that will monitor a working directory for changes and restart the server when it finds one. I'll include all of the code for the curious, but the important bit is the parse_file_list function. var posix = require('posix'); var sys = require('sys'); var server; var child_js_file = process.ARGV[2]; var current_dir = __filename.split('/'); current_dir = current_dir.slice(0, current_dir.length-1).join('/'); var start_server = function(){ server = process.createChildProcess('node', [child_js_file]); server.addListener("output", function(data){sys.puts(data);}); }; var restart_server = function(){ sys.puts('change discovered, restarting server'); server.close(); start_server(); }; var parse_file_list = function(dir, files){ for (var i=0;i<files.length;i++){ var file = dir+'/'+files[i]; sys.puts('file assigned: '+file); posix.stat(file).addCallback(function(stats){ sys.puts('stats returned: '+file); if (stats.isDirectory()) posix.readdir(file).addCallback(function(files){ parse_file_list(file, files); }); else if (stats.isFile()) process.watchFile(file, restart_server); }); } }; posix.readdir(current_dir).addCallback(function(files){ parse_file_list(current_dir, files); }); start_server(); The output from this is: file assigned: /home/defrex/code/node/ejs.js file assigned: /home/defrex/code/node/templates file assigned: /home/defrex/code/node/web file assigned: /home/defrex/code/node/server.js file assigned: /home/defrex/code/node/settings.js file assigned: /home/defrex/code/node/apps file assigned: /home/defrex/code/node/dev_server.js file assigned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js stats returned: /home/defrex/code/node/main_urls.js For those from the future: node.devserver.js
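
    This is the standard var-in-a-loop closure behaviour rather than a node.js quirk: every callback closes over the same file binding, which has already reached its final value by the time the callbacks run. A minimal sketch of one common fix, wrapping the loop body in a function so each iteration captures its own copy (in current JavaScript, declaring the variable with let or using files.forEach has the same effect):

      for (var i = 0; i < files.length; i++) {
          (function (file) {                         // file is local to this call
              sys.puts('file assigned: ' + file);
              posix.stat(file).addCallback(function (stats) {
                  sys.puts('stats returned: ' + file);   // now sees the right path
              });
          })(dir + '/' + files[i]);
      }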

    Read the article

  • Match elements from two files and write them in the intended format to a new file

    - by user2489612
    I am trying to update my text file by matching the first column to another updated file's first column, after match it, it will update the old file. Here is my old file: Name Chr Pos ind1 in2 in3 ind4 foot 1 5 aa bb cc ford 3 9 bb cc 00 fake 3 13 dd ee ff fool 1 5 ee ff gg fork 1 3 ff gg ee Here is the new file: Name Chr Pos foot 1 5 fool 2 5 fork 2 6 ford 3 9 fake 3 13 The updated file will be like: Name Chr Pos ind1 in2 in3 ind4 foot 1 5 aa bb cc fool 2 5 ee ff gg fork 2 6 ff gg ee ford 3 9 bb cc 00 fake 3 13 dd ee ff Here is my code: #!/usr/bin/env python import sys inputfile_1 = sys.argv[1] inputfile_2 = sys.argv[2] outputfile = sys.argv[3] inputfile1 = open(inputfile_1, 'r') inputfile2 = open(inputfile_2, 'r') outputfile = open(outputfile, 'w') ind = inputfile1.readlines() cm = inputfile2.readlines()[1:] outputfile.write(ind[0]) #add header for i in ind: i = i.split() for j in cm: j = j.split() if j[0] == i[0]: outputfile.writelines(j[0:3] + i[3:]) inputfile1.close() inputfile2.close() outputfile.close() When I ran it, it returned a single column rather than the format i wanted, any suggestions? Thanks!
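
    writelines() writes the list elements back to back, with no separators and no newline, which is why the output collapses into a single column. A rough sketch of the inner loop with the fields joined on whitespace and a newline appended (assuming, as in the sample data, that the columns are space-separated):

      for line in ind[1:]:                  # skip the header that was already written
          i = line.split()
          for entry in cm:
              j = entry.split()
              if i and j and j[0] == i[0]:
                  outputfile.write(' '.join(j[0:3] + i[3:]) + '\n')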

    Read the article

  • Python: Importing a file from a parent folder

    - by Sascha
    ...Now I know this question has been asked many times & I have looked at these other threads. Nothing so far has worked, from using sys.path.append('.') to just import foo. I have a python file that wishes to import a file (that is in its parent directory). Can you help me figure out how my child file can successfully import a file in its parent directory? I am using python 2.7. The structure is like so (each directory also has the __init__.py file in it): StockTracker/ __Comp/ ____a.py ____SubComp/ ______b.py Inside b.py, I would like to import a.py: So I have tried each of the following but I still get an error inside b.py saying "There is no such module a" import a import .a import Comp.a import StockTracker.Comp.a import os import sys sys.path.append('.') import a sys.path.remove('.')
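
    One approach that works no matter where the script is launched from is to put the parent directory itself on sys.path (appending '.' only helps when the process is started from that directory); a sketch for b.py assuming the layout above:

      import os
      import sys

      # SubComp/b.py -> one level up is StockTracker/Comp, where a.py lives
      parent_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
      if parent_dir not in sys.path:
          sys.path.insert(0, parent_dir)

      import a   # Comp/a.py is now importable

    The package-style alternative, from .. import a, only works when b.py runs as part of the package (python -m Comp.SubComp.b from the StockTracker directory), not as a standalone script.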

    Read the article

  • How can I compare tables in two different databases using SQL?

    - by chama
    I'm trying to compare the schemas of two tables that exist in different databases. So far, I have this query SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('table1') The only thing is that I don't know how to use the sys.columns to reference a database other than the one that the query is connected to. I tried this SELECT * FROM db.sys.columns WHERE object_id = OBJECT_ID('table1') but it didn't find anything. I'm using SQL Server 2005 Any suggestions? thanks!
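
    The three-part name db.sys.columns does reach into the other database; what stays local is OBJECT_ID('table1'), which is resolved in the database the session is connected to, so the id never matches. A hedged sketch with fully qualified names (the database, schema and table names are placeholders):

      SELECT c.name, c.system_type_id, c.max_length, c.is_nullable
      FROM   OtherDb.sys.columns AS c
      WHERE  c.object_id = OBJECT_ID('OtherDb.dbo.table1');

    Running the same query against the current database and comparing the two result sets (EXCEPT works for this on SQL Server 2005) then shows the schema differences.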

    Read the article

  • How to find the worst performing queries in MS SQL Server 2008?

    - by Thomas Bratt
    How to find the worst performing queries in MS SQL Server 2008? I found the following example but it does not seem to work: SELECT TOP 5 obj.name, max_logical_reads, max_elapsed_time FROM sys.dm_exec_query_stats a CROSS APPLY sys.dm_exec_sql_text(sql_handle) hnd INNER JOIN sys.sysobjects obj on hnd.objectid = obj.id ORDER BY max_logical_reads DESC Taken from: http://www.sqlservercurry.com/2010/03/top-5-costly-stored-procedures-in-sql.html
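
    The example only returns rows for statements that belong to a database object: for ad-hoc queries the objectid returned by sys.dm_exec_sql_text is NULL, so the INNER JOIN to sysobjects filters everything out. A sketch that lists the most expensive statements without that join:

      SELECT TOP 5
             qs.total_logical_reads,
             qs.total_elapsed_time,
             qs.execution_count,
             SUBSTRING(st.text, 1, 200) AS query_text
      FROM   sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_logical_reads DESC;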

    Read the article

  • Pass a value into a Python class through the command line

    - by chrissygormley
    Hello, I have got some code to pass in a variable into a script from the command line. The script is: import sys, os def function(var): print var class function_call(object): def __init__(self, sysArgs): try: self.function = None self.args = [] self.modulePath = sysArgs[0] self.moduleDir, tail = os.path.split(self.modulePath) self.moduleName, ext = os.path.splitext(tail) __import__(self.moduleName) self.module = sys.modules[self.moduleName] if len(sysArgs) > 1: self.functionName = sysArgs[1] self.function = self.module.__dict__[self.functionName] self.args = sysArgs[2:] except Exception, e: sys.stderr.write("%s %s\n" % ("PythonCall#__init__", e)) def execute(self): try: if self.function: self.function(*self.args) except Exception, e: sys.stderr.write("%s %s\n" % ("PythonCall#execute", e)) if __name__=="__main__": test = test() function_call(sys.argv).execute() This works by entering ./function <function> <arg1 arg2 ....>. The problem is that I want to to select the function I want that is in a class rather than just a function by itself. The code I have tried is the same except that function(var): is in a class. I was hoping for some ideas on how to modify my function_call class to accept this. Thanks for any help.
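
    If the callable lives inside a class rather than at module level, it can be looked up with getattr on an instance instead of reading the module's __dict__. A rough standalone sketch of the idea (Target and its method are illustrative names, not the asker's actual class):

      import sys

      class Target(object):
          def function(self, var):
              print var

      if __name__ == "__main__":
          class_name, method_name = sys.argv[1], sys.argv[2]   # e.g. Target function
          cls = globals()[class_name]          # or getattr(module, class_name)
          method = getattr(cls(), method_name)
          method(*sys.argv[3:])                # remaining args passed through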

    Read the article

  • [Python] Tips for optimizing a fraction calculator (faster and using less memory)

    - by Logic Named Joe
    Hello Everyone, Basicly, what I need for the program to do is to act a as simple fraction calculator (for addition, subtraction, multiplication and division) for the a single line of input, for example: -input: 1/7 + 3/5 -output: 26/35 My initial code: import sys def euclid(numA, numB): while numB != 0: numRem = numA % numB numA = numB numB = numRem return numA for wejscie in sys.stdin: wyjscie = wejscie.split(' ') a, b = [int(x) for x in wyjscie[0].split("/")] c, d = [int(x) for x in wyjscie[2].split("/")] if wyjscie[1] == '+': licz = a * d + b * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) elif wyjscie[1] == '-': licz= a * d - b * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) elif wyjscie[1] == '*': licz= a * c mian= b * d nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) else: licz= a * d mian= b * c nwd = euclid(licz, mian) konA = licz/nwd konB = mian/nwd wynik = str(konA) + '/' + str(konB) print(wynik) Which I reduced to: import sys def euclid(numA, numB): while numB != 0: numRem = numA % numB numA = numB numB = numRem return numA for wejscie in sys.stdin: wyjscie = wejscie.split(' ') a, b = [int(x) for x in wyjscie[0].split("/")] c, d = [int(x) for x in wyjscie[2].split("/")] if wyjscie[1] == '+': print("/".join([str((a * d + b * c)/euclid(a * d + b * c, b * d)),str((b * d)/euclid(a * d + b * c, b * d))])) elif wyjscie[1] == '-': print("/".join([str((a * d - b * c)/euclid(a * d - b * c, b * d)),str((b * d)/euclid(a * d - b * c, b * d))])) elif wyjscie[1] == '*': print("/".join([str((a * c)/euclid(a * c, b * d)),str((b * d)/euclid(a * c, b * d))])) else: print("/".join([str((a * d)/euclid(a * d, b * c)),str((b * c)/euclid(a * d, b * c))])) Any advice on how to improve this futher is welcome. Edit: one more thing that I forgot to mention - the code can not make use of any libraries apart from sys.
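
    Since all four branches differ only in how the numerator and denominator are formed, one further reduction (using nothing beyond sys) is to compute those two numbers per operator and share a single reduce-and-print step; a sketch keeping the asker's euclid():

      import sys

      def euclid(a, b):
          while b:
              a, b = b, a % b
          return a

      for line in sys.stdin:
          left, op, right = line.split()
          a, b = map(int, left.split('/'))
          c, d = map(int, right.split('/'))
          if op == '+':
              num, den = a * d + b * c, b * d
          elif op == '-':
              num, den = a * d - b * c, b * d
          elif op == '*':
              num, den = a * c, b * d
          else:                                # division
              num, den = a * d, b * c
          g = euclid(num, den)
          print "%d/%d" % (num // g, den // g)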

    Read the article

  • SFTP in Python? (platform independent)

    - by Mark Wilbur
    I'm working on a simple tool that transfers files to a hard-coded location with the password also hard-coded. I'm a python novice, but thanks to ftplib, it was easy: import ftplib info= ('someuser', 'password') #hard-coded def putfile(file, site, dir, user=(), verbose=True): """ upload a file by ftp to a site/directory login hard-coded, binary transfer """ if verbose: print 'Uploading', file local = open(file, 'rb') remote = ftplib.FTP(site) remote.login(*user) remote.cwd(dir) remote.storbinary('STOR ' + file, local, 1024) remote.quit() local.close() if verbose: print 'Upload done.' if __name__ == '__main__': site = 'somewhere.com' #hard-coded dir = './uploads/' #hard-coded import sys, getpass putfile(sys.argv[1], site, dir, user=info) The problem is that I can't find any library that supports sFTP. What's the normal way to do something like this securely? Edit: Thanks to the answers here, I've gotten it working with Paramiko and this was the syntax. import paramiko host = "THEHOST.com" #hard-coded port = 22 transport = paramiko.Transport((host, port)) password = "THEPASSWORD" #hard-coded username = "THEUSERNAME" #hard-coded transport.connect(username = username, password = password) sftp = paramiko.SFTPClient.from_transport(transport) import sys path = './THETARGETDIRECTORY/' + sys.argv[1] #hard-coded localpath = sys.argv[1] sftp.put(localpath, path) sftp.close() transport.close() print 'Upload done.' Thanks again!

    Read the article

  • Text in gtk.ComboBox without active item

    - by Yotam
    The following PyGTk code, gives a combo-box without an active item. This serves a case where we do not want to have a default, and force the user to select. Still, is there a way to have the empty combo-bar show something like: "Select an item..." without adding a dummy item? import gtk import sys say = sys.stdout.write def cb_changed(w): say("Active index=%d\n" % w.get_active()) topwin = gtk.Window() topwin.set_title("No Default") topwin.set_size_request(0x100, 0x20) topwin.connect('delete-event', gtk.main_quit) vbox = gtk.VBox() ls = gtk.ListStore(str, str) combo = gtk.ComboBox(ls) cell = gtk.CellRendererText() combo.pack_start(cell) combo.add_attribute(cell, 'text', 0) combo.connect('changed', cb_changed) ls.clear() map(lambda i: ls.append(["Item-%d" % i, "Id%d" % i]), range(3)) vbox.pack_start(combo, padding=2) topwin.add(vbox) topwin.show_all() gtk.main() say("%s Exiting\n" % sys.argv[0]) sys.exit(0)

    Read the article

  • Adding Timestamp to Java's GC messages in Tomcat 6

    - by ripper234
    I turned on Java's GC log options -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails Which print out these messages to standard output (catalina.out): 314.884: [CMS-concurrent-mark-start] 315.014: [CMS-concurrent-mark: 0.129/0.129 secs] [Times: user=0.14 sys=0.00, real=0.13 secs] 315.014: [CMS-concurrent-preclean-start] 315.016: [CMS-concurrent-preclean: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 315.016: [CMS-concurrent-abortable-preclean-start] 332.055: [GC 332.055: [ParNew: 17128K->84K(19136K), 0.0017700 secs] 88000K->70956K(522176K) icms_dc=4 , 0.0018660 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] CMS: abort preclean due to time 352.253: [CMS-concurrent-abortable-preclean: 0.023/37.237 secs] [Times: user=0.78 sys=0.02, real=37.23 secs] How can I make these log lines appear with an actual timestamp (including date) instead of these numbers, which presumably mean "time since JVM started" ?
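
    Recent JVMs (Java 6 update 4 and later, if memory serves) have a companion flag that prefixes each GC line with the wall-clock date and time instead of only seconds since JVM start; a sketch of the options as they might appear in Tomcat's setenv.sh or JAVA_OPTS (on older JVMs the usual fallback is piping catalina.out through a timestamping script):

      JAVA_OPTS="$JAVA_OPTS -XX:+PrintGC -XX:+PrintGCDetails \
          -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps \
          -Xloggc:$CATALINA_BASE/logs/gc.log"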

    Read the article

  • Python: Run a progress bar and work simultaneously?

    - by DanielTA
    I want to know how to run a progress bar and some other work simultaneously, then when the work is done, stop the progress bar in Python (2.7.x) import sys, time def progress_bar(): while True: for c in ['-','\\','|','/']: sys.stdout.write('\r' + "Working " + c) sys.stdout.flush() time.sleep(0.2) def work(): *doing hard work* How would I be able to do something like: progress_bar() #run in background? work() *stop progress bar* print "\nThe work is done!"
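
    One way that stays within the standard library is to run the spinner in a background thread and signal it with a threading.Event when the work is done; a sketch (work() is simulated with a sleep):

      import sys, time, threading

      def progress_bar(stop_event):
          while not stop_event.is_set():
              for c in ['-', '\\', '|', '/']:
                  sys.stdout.write('\r' + 'Working ' + c)
                  sys.stdout.flush()
                  time.sleep(0.2)

      def work():
          time.sleep(3)          # stand-in for the real work

      stop_event = threading.Event()
      spinner = threading.Thread(target=progress_bar, args=(stop_event,))
      spinner.start()
      work()
      stop_event.set()           # tell the spinner to stop
      spinner.join()
      print "\nThe work is done!"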

    Read the article

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series] LINQ to SQL has a lot of great features like strong typing query compilation deferred execution declarative paradigm etc., which are very productive. Of course, these cannot be free, and one price is the performance. O/R mapping overhead Because LINQ to SQL is based on O/R mapping, one obvious overhead is, data changing usually requires data retrieving:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { Product product = database.Products.Single(item => item.ProductID == id); // SELECT... product.UnitPrice = unitPrice; // UPDATE... database.SubmitChanges(); } } Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than direct data update via ADO.NET:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (SqlConnection connection = new SqlConnection( "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True")) using (SqlCommand command = new SqlCommand( @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID", connection)) { command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id; command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice; connection.Open(); command.Transaction = connection.BeginTransaction(); command.ExecuteNonQuery(); // UPDATE... command.Transaction.Commit(); } } The above imperative code specifies the “how to do” details with better performance. For the same reason, some articles from Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.ExecuteCommand( "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}", id, unitPrice); } } Or just create a stored procedure:CREATE PROCEDURE [dbo].[UpdateProductUnitPrice] ( @ProductID INT, @UnitPrice MONEY ) AS BEGIN BEGIN TRANSACTION UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID COMMIT TRANSACTION END and map it as a method of NorthwindDataContext (explained in this post):private static void UpdateProductUnitPrice(int id, decimal unitPrice) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.UpdateProductUnitPrice(id, unitPrice); } } As a normal trade off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. In a developer’s perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable. Data retrieving overhead After talking about the O/R mapping specific issue. Now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post has explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to the compiler pipeline. 
It consists of about 15 steps to translate an C# expression tree to SQL statement, which can be categorized as: Convert: Invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes; Bind: Used visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.; Flatten: Figure out the hierarchy of the query; Rewrite: for SQL Server 2000, if needed Reduce: Remove the unnecessary information from the tree. Parameterize Format: Generate the SQL statement string; Parameterize: Figure out the parameters, for example, a reference to a local variable should be a parameter in SQL; Materialize: Executes the reader and convert the result back into typed objects. So for each data retrieving, even for data retrieving which looks simple: private static Product[] RetrieveProducts(int productId) { using (NorthwindDataContext database = new NorthwindDataContext()) { return database.Products.Where(product => product.ProductID == productId) .ToArray(); } } LINQ to SQL goes through above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query. Compiled query When such a LINQ to SQL query is executed repeatedly, The CompiledQuery can be used to translate query for one time, and execute for multiple times:internal static class CompiledQueries { private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts = CompiledQuery.Compile((NorthwindDataContext database, int productId) => database.Products.Where(product => product.ProductID == productId).ToArray()); internal static Product[] RetrieveProducts( this NorthwindDataContext database, int productId) { return _retrieveProducts(database, productId); } } The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is first time invoked, it internally invokes SqlProvider.Compile() to translate the query expression. And it also uses lock to make sure translating once in multi-threading scenarios. Static SQL / stored procedures without translating Another way to avoid the translating overhead is to use static SQL or stored procedures, just as the above examples. Because this is a functional programming series, this article not dive into. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures) LINQ to SQL (Part 7: Updating our Database using Stored Procedures) LINQ to SQL (Part 8: Executing Custom SQL Expressions) Data changing overhead By looking into the data updating process, it also needs a lot of work: Begins transaction Processes the changes (ChangeProcessor) Walks through the objects to identify the changes Determines the order of the changes Executes the changings LINQ queries may be needed to execute the changings, like the first example in this article, an object needs to be retrieved before changed, then the above whole process of data retrieving will be went through If there is user customization, it will be executed, for example, a table’s INSERT / UPDATE / DELETE can be customized in the O/R designer It is important to keep these overhead in mind. 
Bulk deleting / updating Another thing to be aware is the bulk deleting:private static void DeleteProducts(int categoryId) { using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.DeleteAllOnSubmit( database.Products.Where(product => product.CategoryID == categoryId)); database.SubmitChanges(); } } The expected SQL should be like:BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 COMMIT TRANSACTION Hoverer, as fore mentioned, the actual SQL is to retrieving the entities, and then delete them one by one:-- Retrieves the entities to be deleted: exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9 -- Deletes the retrieved entities one by one: BEGIN TRANSACTION exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0 -- ... COMMIT TRANSACTION And the same to the bulk updating. This is really not effective and need to be aware. Here is already some solutions from the Internet, like this one. The idea is wrap the above SELECT statement into a INNER JOIN:exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0] INNER JOIN ( SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0) AS [j1] ON ([j0].[ProductID] = [j1].[[Products])', -- The Primary Key N'@p0 int',@p0=9 Query plan overhead The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL has an issue (not sure if it is a bug). LINQ to SQL internally uses ADO.NET, but it does not set the SqlParameter.Size for a variable-length argument, like argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument length:using (NorthwindDataContext database = new NorthwindDataContext()) { database.Products.Where(product => product.ProductName == "A") .Select(product => product.ProductID).ToArray(); // The same SQL and argument type, different argument length. 
database.Products.Where(product => product.ProductName == "AA") .Select(product => product.ProductID).ToArray(); } Pay attention to the argument length in the translated SQL:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA' Here is the overhead: The first query’s query plan cache is not reused by the second one:SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql] FROM sys.syscacheobjects INNER JOIN sys.dm_exec_cached_plans ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid; They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:internal static class SqlTypeSystem { private abstract class ProviderBase : TypeSystemProvider { protected int? GetLargestDeclarableSize(SqlType declaredType) { SqlDbType sqlDbType = declaredType.SqlDbType; if (sqlDbType <= SqlDbType.Image) { switch (sqlDbType) { case SqlDbType.Binary: case SqlDbType.Image: return 8000; } return null; } if (sqlDbType == SqlDbType.NVarChar) { return 4000; // Max length for NVARCHAR. } if (sqlDbType != SqlDbType.VarChar) { return null; } return 8000; } } } In this above example, the translated SQL becomes:exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A' exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA' So that they reuses the same query plan cache: Now the [usecounts] column is 2.

    Read the article

  • How to install SpatiaLite 3 on 12.04

    - by Terra
    1) sudo apt-get install libsqlite3-dev libgeos-dev 2) libspatialite-3.0.0-stable$ ./configure Result: configure: error: cannot find proj_api.h, bailing out checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for stdlib.h... (cached) yes checking stdio.h usability... yes checking stdio.h presence... yes checking for stdio.h... yes checking for string.h... (cached) yes checking for memory.h... (cached) yes checking math.h usability... yes checking math.h presence... yes checking for math.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking for inttypes.h... (cached) yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdint.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking for unistd.h... (cached) yes checking sqlite3.h usability... yes checking sqlite3.h presence... yes checking for sqlite3.h... yes checking sqlite3ext.h usability... yes checking sqlite3ext.h presence... yes checking for sqlite3ext.h... yes checking for g++... no checking for c++... no checking for gpp... no checking for aCC... no checking for CC... no checking for cxx... no checking for cc++... no checking for cl.exe... no checking for FCC... no checking for KCC... no checking for RCC... no checking for xlC_r... no checking for xlC... no checking whether we are using the GNU C++ compiler... no checking whether g++ accepts -g... no checking dependency style of g++... none checking for gcc... (cached) gcc checking whether we are using the GNU C compiler... (cached) yes checking whether gcc accepts -g... (cached) yes checking for gcc option to accept ISO C89... (cached) none needed checking dependency style of gcc... (cached) gcc3 checking how to run the C preprocessor... gcc -E checking whether ln -s works... yes checking whether make sets $(MAKE)... (cached) yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for fgrep... 
/bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking the maximum length of command line arguments... 1572864 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert i686-pc-linux-gnu file names to i686-pc-linux-gnu format... func_convert_file_noop checking how to convert i686-pc-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... pass_all checking for dlltool... dlltool checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for mt... mt checking if mt is a manifest tool... no checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking for an ANSI C-conforming const... yes checking for off_t... yes checking for size_t... yes checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for working volatile... yes checking whether lstat correctly handles trailing slash... yes checking whether lstat accepts an empty string... no checking whether lstat correctly handles trailing slash... (cached) yes checking for working memcmp... yes checking whether stat accepts an empty string... no checking for strftime... yes checking for memset... yes checking for sqrt... no checking for strcasecmp... yes checking for strerror... yes checking for strncasecmp... yes checking for strstr... yes checking for fdatasync... yes checking for ftruncate... yes checking for getcwd... yes checking for gettimeofday... yes checking for localtime_r... yes checking for memmove... yes checking for strerror... (cached) yes checking for sqlite3_prepare_v2 in -lsqlite3... yes checking for sqlite3_rtree_geometry_callback in -lsqlite3... yes checking proj_api.h usability... no checking proj_api.h presence... no checking for proj_api.h... no configure: error: cannot find proj_api.h, bailing out
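
    The configure failure is only about the missing PROJ development headers; installing them (libproj-dev on Ubuntu 12.04) usually lets the script get past this point:

      sudo apt-get install libproj-dev libgeos-dev libsqlite3-dev
      ./configure
      make
      sudo make install
      sudo ldconfig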

    Read the article

  • GeoIP and Nginx

    - by JavierMartinez
    I have a nginx with geoip, but it is not working rightly. The issue is the next: Nginx are getting geodata from $_SERVER['REMOTE_ADDR'] instead of $_SERVER['HTTP_X_HAPROXY_IP'], which have the real client ip. So, the reported geodata belongs to my server ip instead of client ip. Does anybody where could be the error to fix it? Nginx version and compiled modules: nginx -V nginx version: nginx/1.2.3 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log- path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-cache-purge nginx site conf (frontend machine) server { root /var/www/storage; server_name ~^.*(\.)?mydomain.com$; if ($host ~ ^(.*)\.mydomain\.com$) { set $new_host $1.mydomain.com; } if ($host !~ ^(.*)\.mydomain\.com$) { set $new_host www.mydomain.com; } add_header Staging true; real_ip_header X-HAProxy-IP; set_real_ip_from 10.5.0.10/32; location /files { expires 30d; if ($uri !~ ^/files/([a-fA-F0-9]+)_(220|45)\.jpg$) { return 403; } rewrite ^/files/([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9]+)_(220|45)\.jpg$ /files/$1/$2/$3/$4/$1$2$3$4$5_$6.jpg break; try_files $uri @to_backend; } location /assets { if ($uri ~ ^/assets/r([a-zA-Z0-9]+[^/])(/(css|js|fonts)/.*)) { rewrite ^/assets/r([a-zA-Z0-9]+[^/])/(css|js|fonts)/(.*)$ /assets/$2/$3 break; } try_files $uri @to_backend; } location / { proxy_set_header Host $new_host; proxy_set_header X-HAProxy-IP $remote_addr; proxy_pass http://10.5.0.10:8080; } location @to_backend { proxy_set_header Host $new_host; proxy_pass http://10.5.0.10:8080; } } nginx.conf (backend machine) http{ ... ## # GeoIP Config ## geoip_country /etc/nginx/geoip/GeoIP.dat; # the country IP database geoip_city /etc/nginx/geoip/GeoLiteCity.dat; # the city IP database ... 
} fastcgi_params (backend machine) ### SET GEOIP Variables ### fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_CODE3 $geoip_country_code3; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code; fastcgi_param GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3; fastcgi_param GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name; fastcgi_param GEOIP_REGION $geoip_region; fastcgi_param GEOIP_CITY $geoip_city; fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; fastcgi_param GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code; fastcgi_param GEOIP_LATITUDE $geoip_latitude; fastcgi_param GEOIP_LONGITUDE $geoip_longitude; haproxy.conf (frontend machine) defaults log global option forwardfor option httpclose mode http retries 3 option redispatch maxconn 4096 contimeout 100000 clitimeout 100000 srvtimeout 100000 listen cluster_webs *:8080 mode http option tcpka option httpchk option httpclose option forwardfor balance roundrobin server backend-stage 10.5.0.11:80 weight 1 $_SERVER dump: http://paste.laravel.com/7dy Where 10.5.0.10 is frontend private ip and 10.5.0.11 backend private ip
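
    The GeoIP module looks up whatever address nginx considers to be the client, so the backend has to be told to trust the frontend's header before the lookup runs. A hedged sketch for the backend's http block, assuming that build also includes the realip module (addresses as in the question):

      http {
          # take the client address from the frontend's header
          set_real_ip_from 10.5.0.10;
          real_ip_header   X-HAProxy-IP;

          geoip_country /etc/nginx/geoip/GeoIP.dat;
          geoip_city    /etc/nginx/geoip/GeoLiteCity.dat;
          ...
      }

    An alternative is geoip_proxy 10.5.0.10;, which makes the GeoIP module itself skip the proxy address, but that directive reads X-Forwarded-For, so the frontend would then have to send that header rather than X-HAProxy-IP.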

    Read the article

  • Linux fsck.ext3 says "Device or resource busy" although I did not mount the disk.

    - by matnagel
    I am running an ubuntu 8.04 server instance with a 8GB virtual disk on vmware 1.0.9. For disk maintenance I made a copy of the virtual disk (by making a copy of the 2 vmdk files of sda on the stopped vm on the host) and added it to the original vm. Now this vm has it's original virtual disk sda plus a 1:1 copy (sdd). There are 2 additional disk sdb and sdc which I ignore.) I would expect sdb not to be mounted when I start the vm. So I try tp do a ext2 fsck on sdd from the running vm, but it reports fsck reported that sdb was mounted. $ sudo fsck.ext3 -b 8193 /dev/sdd e2fsck 1.40.8 (13-Mar-2008) fsck.ext3: Device or resource busy while trying to open /dev/sdd Filesystem mounted or opened exclusively by another program? The "mount" command does not tell me sdd is mounted: $ sudo mount /dev/sda1 on / type ext3 (rw,relatime,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) /sys on /sys type sysfs (rw,noexec,nosuid,nodev) varrun on /var/run type tmpfs (rw,noexec,nosuid,nodev,mode=0755) varlock on /var/lock type tmpfs (rw,noexec,nosuid,nodev,mode=1777) udev on /dev type tmpfs (rw,mode=0755) devshm on /dev/shm type tmpfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) /dev/sdc1 on /mnt/r1 type ext3 (rw,relatime,errors=remount-ro) /dev/sdb1 on /mnt/k1 type ext3 (rw,relatime,errors=remount-ro) securityfs on /sys/kernel/security type securityfs (rw) When I ignore the warning and continue the fsck, it reported many errors. How do I get this under control? Is there a better way to figure out if sdd is mounted? Or how is it "busy? How to unmount it then? How to prevent ubuntu from automatically mounting. Or is there something else I am missing? Also from /var/log/syslog I cannot see it is mounted, this is the last part of the startup sequence: kernel: [ 14.229494] ACPI: Power Button (FF) [PWRF] kernel: [ 14.230326] ACPI: AC Adapter [ACAD] (on-line) kernel: [ 14.460136] input: PC Speaker as /devices/platform/pcspkr/input/input3 kernel: [ 14.639366] udev: renamed network interface eth0 to eth1 kernel: [ 14.670187] eth1: link up kernel: [ 16.329607] input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/ kernel: [ 16.367540] parport_pc 00:08: reported by Plug and Play ACPI kernel: [ 16.367670] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE] kernel: [ 19.425637] NET: Registered protocol family 10 kernel: [ 19.437550] lo: Disabled Privacy Extensions kernel: [ 24.328857] loop: module loaded kernel: [ 24.449293] lp0: using parport0 (interrupt-driven). kernel: [ 26.075499] EXT3 FS on sda1, internal journal kernel: [ 28.380299] kjournald starting. Commit interval 5 seconds kernel: [ 28.381706] EXT3 FS on sdc1, internal journal kernel: [ 28.381747] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 28.444867] kjournald starting. Commit interval 5 seconds kernel: [ 28.445436] EXT3 FS on sdb1, internal journal kernel: [ 28.445444] EXT3-fs: mounted filesystem with ordered data mode. kernel: [ 31.309766] eth1: no IPv6 routers present kernel: [ 35.054268] ip_tables: (C) 2000-2006 Netfilter Core Team mysqld_safe[4367]: started mysqld[4370]: 100124 14:40:21 InnoDB: Started; log sequence number 0 10130914 mysqld[4370]: 100124 14:40:21 [Note] /usr/sbin/mysqld: ready for connections. mysqld[4370]: Version: '5.0.51a-3ubuntu5.4' socket: '/var/run/mysqld/mysqld.sock' port: 3 /etc/mysql/debian-start[4417]: Upgrading MySQL tables if necessary. 
/etc/mysql/debian-start[4422]: Looking for 'mysql' in: /usr/bin/mysql /etc/mysql/debian-start[4422]: Looking for 'mysqlcheck' in: /usr/bin/mysqlcheck /etc/mysql/debian-start[4422]: This installation of MySQL is already upgraded to 5.0.51a, u /etc/mysql/debian-start[4436]: Checking for insecure root accounts. /etc/mysql/debian-start[4444]: Checking for crashed MySQL tables.
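
    A few commands usually show what is holding the device; note also that if the copied disk carries a partition table, the filesystem lives on /dev/sdd1, so pointing fsck at the whole /dev/sdd would report errors even when nothing is mounted. A sketch:

      cat /proc/mounts                     # mounts the mount command may not list
      sudo dmsetup ls                      # device-mapper / LVM targets built on the disk
      sudo pvs && sudo lvs                 # LVM volumes auto-activated from the copy
      sudo fuser -v /dev/sdd /dev/sdd1     # processes with the device open
      sudo fsck.ext3 -n /dev/sdd1          # read-only check of the partition, not the disk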

    Read the article

  • ufw port forwarding to a VirtualBox guest

    - by user85116
    My goal is to be able to connect using remote desktop on my desktop machine, to windows xp running in virtualbox on my linux server. My setup: server = debian squeeze, 64 bit, with a public IP address (host) virtualbox-ose 3.2.10 (from debian repo) windows xp running inside VBox as a guest; bridged networking mode in VBox, ip = 192.168.1.100 ufw as the firewall on debian, 3 ports are opened: 22 / ssh, 80 / apache, and 3389 for remote desktop My problem: If I try to use remote desktop on my home computer, I am unable to connect to the windows guest. If I first "ssh -X -C" into the debian server, then run "rdesktop 192.168.1.100", I am able to connect without issue. The windows firewall was configured to allow remote desktop connections, and I've even turned it off (as it is redundant here) to see if that was the problem but it made no difference. Since I am able to connect from inside the local subnet, I suspect that I have not setup my debian firewall correctly to handle connections from outside the LAN. Here is what I've done... First my ufw status: ufw status Status: active To Action From -- ------ ---- 22 ALLOW Anywhere 80 ALLOW Anywhere 3389 ALLOW Anywhere I edited /etc/ufw/sysctl.conf and added: net/ipv4/ip_forward=1 Edited /etc/default/ufw and added: DEFAULT_FORWARD_POLICY="ACCEPT" Edited /etc/ufw/before.rules and added: # setup port forwarding to forward rdp to windows VM *nat :PREROUTING - [0:0] -A PREROUTING -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100 -A PREROUTING -i eth0 -p udp --dport 3389 -j DNAT --to-destination 192.168.1.100 COMMIT # Don't delete these required lines, otherwise there will be errors *filter <snip> Restarted the firewall etc., but no connection. My log files on the debian host show this (my public ip address was removed for this posting but it is correct in the actual log): Feb 6 11:11:21 localhost kernel: [171991.856941] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:21 localhost kernel: [171991.856963] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27518 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856701] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:24 localhost kernel: [171994.856723] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27519 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856656] [UFW AUDIT] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Feb 6 11:11:30 localhost kernel: [172000.856678] [UFW ALLOW] IN=eth0 OUT=eth0 SRC=aaa.bbb.ccc.dd DST=192.168.1.100 LEN=60 TOS=0x00 PREC=0x00 TTL=45 ID=27520 DF PROTO=TCP SPT=54201 DPT=3389 WINDOW=5840 RES=0x00 SYN URGP=0 Although this is the current setup / configuration, I've also tried several variations of this; I thought maybe the ISP would be blocking 3389 for some reason and tried using different ports, but again there was no connection. Any ideas...? Did I forget to modify some file somewhere?
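
    The UFW log shows the DNAT rule firing, so the usual remaining suspect is the return path: the XP guest answers through its own default gateway rather than back through the Debian host, and the connection never completes. A hedged addition to the same *nat table in /etc/ufw/before.rules that masquerades the forwarded RDP traffic so replies retrace the inbound route:

      *nat
      :PREROUTING ACCEPT [0:0]
      :POSTROUTING ACCEPT [0:0]
      -A PREROUTING  -i eth0 -p tcp --dport 3389 -j DNAT --to-destination 192.168.1.100
      -A POSTROUTING -p tcp -d 192.168.1.100 --dport 3389 -j MASQUERADE
      COMMIT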

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experiment quite some time with a "typical" django setting upon nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments) Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like with these hypotesis: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts form my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert splitted >= 2 splited.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response)` folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project ups are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for one only django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or.. should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse, is that relatively often apache2 gets stuck. 
It just does not answer anymore, and has to be restarted. I have setup a monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with the database access (postgresql) under heavy load, but even if it was, why would the apache2 processes get stuck? Beside these two issues, performance is overall great. I even tried New Relic and got very good average results.
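
    For a quad-core host running several separate Django projects, a common starting point is a small daemon group per project with a handful of processes and moderate threads, plus a request limit so a wedged interpreter recycles itself instead of hanging Apache; a hedged sketch (the numbers are tuning starting points, not a recommendation):

      WSGIDaemonProcess subdomain.domain user=www-data group=www-data \
          processes=3 threads=10 maximum-requests=1000 display-name=%{GROUP}
      WSGIProcessGroup subdomain.domain
      WSGIScriptAlias / /home/project1/project/wsgi.py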

    Read the article

  • XenServer Converting HVM to Paravirtualised

    - by Karl Kloppenborg
    Recently I have been tasked with the daunting process of converting a setup of HVM enabled VMs (running on Citrix XenServer 5.6.0) into PV (paravirtualised) containers. The constraints of the project was that: The operating system must be functionally identical after the migration. minimal modification to the operating system (with exception of kernel / drive mapping) I also was allowed to change the bootloader(ie, grub) in what ever way I see fit. However, I have attempted this, I will firstly like to show you my steps I took. This at the moment is CentOS5.5 specific: Steps: yum install kernel-xen This installed: 2.6.18-194.32.1.el5xen edited: /boot/grub/menu.lst changed my specs to match: title CentOS (2.6.18-194.32.1.el5xen) root (hd0,0) kernel /vmlinuz-2.6.18-194.32.1.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0 initrd /initrd-2.6.18-194.32.1.el5xen.img Then I changed my xenserver parameters to match: xe vm-param-set uuid=[vm uuid] PV-bootloader-args="--kernel /vmlinuz-2.6.18-194.32.1.el5xen --ramdisk /initrd-2.6.18-194.32.1.el5xen.img" xe vm-param-set uuid=[vm uuid] HVM-boot-policy="" xe vm-param-set uuid=[vm uuid] PV-bootloader=pygrub xe vbd-param-set uuid==[Virtual Block Device/VBD uuid] bootable=true Some things to note, I am running a VolGroup LVM ;) Anyways, after all these steps (which aren't much!) I boot the VM and it boots initial kernel just fine, however I am presented with this error: Boot Screen: device-mapper: dm-raid45: initialized v0.2594l Waiting for driver initialization. Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Activating logical volumes Volume group "VolGroup00" not found Creating root device. Mounting root filesystem. mount: could not find filesystem '/dev/root' Setting up other filesystems. Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Now my hints are that it cannot detect / because of the fact that when you change from HVM mode to PV it does something (not that obvious) When you make a SR (storage) on a HVM, you get it mounted to the guest os as /dev/hda. However in PV mode, this presents itself as /dev/xvda... Could this be the answer? and if so, how the heck to I implement it?? Update: So I have gotten a bit further in my quest, as it now detects the LVM's... To do this, I required to recompile the xen-kernel initrd image. Command: mkinitrd -v --builtin=xen_vbd --preload=xenblk initrd-2.6.18-194.32.1.el5xen.img 2.6.18-194.32.1.el5xen Now when I boot I get this: Boot Screen: Loading dm-raid45.ko module device-mapper: dm-raid45: initialized v0.2594l Scanning and configuring dmraid supported devices Scanning logical volumes Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 Activating logical volumes 3 logical volume(s) in volume group "VolGroup00" now active Creating root device. Mounting root filesystem. mount: error mounting /dev/root on /sysroot as ext3: Device or resource busy Setting up other filesystems. 
Setting up new root fs setuproot: moving /dev failed: No such file or directory no fstab.sys, mounting internal defaults setuproot: error mounting /proc: No such file or directory setuproot: error mounting /sys: No such file or directory Switching to new root and running init. unmounting old /dev unmounting old /proc unmounting old /sys switchroot: mount failed: No such file or directory Kernel panic - not syncing: Attempted to kill init!

    Read the article

  • Setup access to SAS RAID drives with NTFS partitions on CentOS Machine

    - by Quanano
We have a Dell PowerEdge 2900 system with an Adaptec 39320A SCSI controller card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other RAID array, which sits on a different controller, and that is working fine. We are now trying to access the drives described above, but they are not shown in /dev as sdb, etc. sda is the drive we installed CentOS on and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found. sg0 is the CD-ROM and sg2 is the drive CentOS is installed on; I think sg1 is the other drive, but I cannot see any way to mount its partitions, as only the drive itself is listed in /dev. Thanks.

EXTRA INFO

    fdisk -l

    Disk /dev/sda: 72.7 GB, 72746008576 bytes
    255 heads, 63 sectors/track, 8844 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x11e3119f

    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 64 512000 83 Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2 64 8845 70528000 8e Linux LVM

    Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
    255 heads, 63 sectors/track, 4186 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

    Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
    255 heads, 63 sectors/track, 2570 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

    Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
    255 heads, 63 sectors/track, 2023 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

These are all from the install HDD, not the additional hard drives.

    modprobe a320raid
    FATAL: Module a320raid not found.

    lsscsi -v
    [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0
      dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
    [4:0:10:0] enclosu DP BACKPLANE 1.05 -
      dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
    [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda
      dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]
    lsmod
    Module  Size  Used by
    fuse  66285  0
    des_generic  16604  0
    ecb  2209  0
    md4  3461  0
    nls_utf8  1455  0
    cifs  278370  0
    autofs4  26888  4
    ipt_REJECT  2383  0
    ip6t_REJECT  4628  2
    nf_conntrack_ipv6  8748  2
    nf_defrag_ipv6  12182  1 nf_conntrack_ipv6
    xt_state  1492  2
    nf_conntrack  79453  2 nf_conntrack_ipv6,xt_state
    ip6table_filter  2889  1
    ip6_tables  19458  1 ip6table_filter
    ipv6  322029  31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
    bnx2  79618  0
    ses  6859  0
    enclosure  8395  1 ses
    dcdbas  9219  0
    serio_raw  4818  0
    sg  30124  0
    iTCO_wdt  13662  0
    iTCO_vendor_support  3088  1 iTCO_wdt
    i5000_edac  8867  0
    edac_core  46773  3 i5000_edac
    i5k_amb  5105  0
    shpchp  33482  0
    ext4  364410  3
    mbcache  8144  1 ext4
    jbd2  88738  1 ext4
    sd_mod  39488  3
    crc_t10dif  1541  1 sd_mod
    sr_mod  16228  0
    cdrom  39771  1 sr_mod
    megaraid_sas  77090  2
    aic79xx  129492  0
    scsi_transport_spi  26151  1 aic79xx
    pata_acpi  3701  0
    ata_generic  3837  0
    ata_piix  22846  0
    radeon  1023359  1
    ttm  70328  1 radeon
    drm_kms_helper  33236  1 radeon
    drm  230675  3 radeon,ttm,drm_kms_helper
    i2c_algo_bit  5762  1 radeon
    i2c_core  31276  4 radeon,drm_kms_helper,drm,i2c_algo_bit
    dm_mirror  14101  0
    dm_region_hash  12170  1 dm_mirror
    dm_log  10122  2 dm_mirror,dm_region_hash
    dm_mod  81500  11 dm_mirror,dm_log
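
A minimal sketch of how the disks could be exposed and mounted, assuming the 39320A is (or can be switched to) plain SCSI mode so the aic79xx driver already loaded above can see the drives directly, and assuming ntfs-3g is available from a third-party repository such as EPEL; host numbers and device names are illustrative. If the card is running in HostRAID mode the disks will not appear to aic79xx at all, which would also explain the failed modprobe a320raid.

    # Rescan every SCSI host so newly visible disks get sd* device nodes
    for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

    # The SAS disks should now show up as /dev/sdb, /dev/sdc, ...
    fdisk -l

    # NTFS needs the ntfs-3g FUSE driver (the fuse module is already loaded above)
    yum install ntfs-3g

    # Mount one of the NTFS partitions read-only to start with
    mkdir -p /mnt/sas1
    mount -t ntfs-3g -o ro /dev/sdb1 /mnt/sas1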

    Read the article

  • iptables rule(s) to send openvpn traffic from clients over an sshuttle tunnel?

    - by Sam Martin
I have an Ubuntu 12.04 box with OpenVPN. The VPN is working as expected -- clients can connect, browse the Web, etc. The OpenVPN server IP is 10.8.0.1 on tun0. On that same box, I can use sshuttle to tunnel into another network to access a Web server on 10.10.0.9. sshuttle does its magic using the following iptables commands:

    iptables -t nat -N sshuttle-12300
    iptables -t nat -F sshuttle-12300
    iptables -t nat -I OUTPUT 1 -j sshuttle-12300
    iptables -t nat -I PREROUTING 1 -j sshuttle-12300
    iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.10.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42
    iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.0/8 -p tcp

Is it possible to forward traffic from OpenVPN clients over the sshuttle tunnel to the remote Web server? I'd ultimately like to be able to set up any complicated tunneling on the server, and have relatively "dumb" clients (iPad, etc.) be able to access the remote servers via OpenVPN. Below is a basic diagram of the scenario:

[Edit: added output from the OpenVPN box]

    $ sudo iptables -nL -v -t nat
    Chain PREROUTING (policy ACCEPT 1498 packets, 252K bytes)
    pkts bytes target prot opt in out source destination
    1512 253K sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0
    Chain INPUT (policy ACCEPT 322 packets, 58984 bytes)
    pkts bytes target prot opt in out source destination
    Chain OUTPUT (policy ACCEPT 584 packets, 43241 bytes)
    pkts bytes target prot opt in out source destination
    587 43421 sshuttle-12300 all -- * * 0.0.0.0/0 0.0.0.0/0
    Chain POSTROUTING (policy ACCEPT 589 packets, 43595 bytes)
    pkts bytes target prot opt in out source destination
    1175 76298 MASQUERADE all -- * eth0 10.8.0.0/24 0.0.0.0/0
    Chain sshuttle-12300 (2 references)
    pkts bytes target prot opt in out source destination
    17 1076 REDIRECT tcp -- * * 0.0.0.0/0 10.10.0.0/24 TTL match TTL != 42 redir ports 12300
    0 0 RETURN tcp -- * * 0.0.0.0/0 127.0.0.0/8

    $ sudo iptables -nL -v -t filter
    Chain INPUT (policy ACCEPT 97493 packets, 30M bytes)
    pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
    pkts bytes target prot opt in out source destination
    131K 109M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
    1370 89160 ACCEPT all -- * * 10.8.0.0/24 0.0.0.0/0
    0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

[Edit 2: more OpenVPN server output]

    $ netstat -r
    Kernel IP routing table
    Destination Gateway Genmask Flags MSS Window irtt Iface
    default 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
    10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0
    10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0
    192.168.1.0 * 255.255.255.0 U 0 0 0 eth0

[Edit 3: still more debug output]

IP forwarding appears to be enabled correctly on the OpenVPN server:

    # find /proc/sys/net/ipv4/conf/ -name forwarding -ls -execdir cat {} \;
    18926 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/all/forwarding
    1
    18954 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/default/forwarding
    1
    18978 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/eth0/forwarding
    1
    19003 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/lo/forwarding
    1
    19028 0 -rw-r--r-- 1 root root 0 Mar 5 13:31 /proc/sys/net/ipv4/conf/tun0/forwarding
    1

Client routing table:

    $ netstat -r
    Routing tables
    Internet:
    Destination Gateway Flags Refs Use Netif Expire
    0/1 10.8.0.5 UGSc 8 48 tun0
    default 192.168.1.1 UGSc 2 1652 en1
    10.8.0.1/32 10.8.0.5 UGSc 1 0 tun0
    10.8.0.5 10.8.0.6 UHr 13 0 tun0
    10.10.0/24 10.8.0.5 UGSc 0 0 tun0
    <snip>

Traceroute from client:

    $ traceroute 10.10.0.9
    traceroute to 10.10.0.9 (10.10.0.9), 64 hops max, 52 byte packets
    1 10.8.0.1 (10.8.0.1) 5.403 ms 1.173 ms 1.086 ms
    2 192.168.1.1 (192.168.1.1) 4.693 ms 2.110 ms 1.990 ms
    3 l100.my-verizon-garbage (client-ext-ip) 7.453 ms 7.089 ms 6.248 ms
    4 * * *
    5 10.10.0.9 (10.10.0.9) 14.915 ms !N * 6.620 ms !N
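
A minimal sketch of the kind of rules being asked about, not a confirmed answer: sshuttle's chain is already hooked into nat PREROUTING, so TCP packets arriving on tun0 for 10.10.0.0/24 should in principle hit the REDIRECT; the commands below just make the tun0 path explicit and observable. The server.conf path and interface names are assumptions.

    # In the OpenVPN server config (e.g. /etc/openvpn/server.conf; assumption),
    # push a route so clients send 10.10.0.x traffic into the VPN:
    #   push "route 10.10.0.0 255.255.255.0"

    # Explicit PREROUTING jump limited to tun0, mainly useful for counting packets:
    iptables -t nat -I PREROUTING 1 -i tun0 -p tcp -d 10.10.0.0/24 -j sshuttle-12300

    # Watch the chain counters while a VPN client connects to 10.10.0.9:
    watch -n1 'iptables -t nat -nvL sshuttle-12300'

    # Note: sshuttle only proxies TCP, so ping and the default UDP traceroute from
    # clients will not take the tunnel even when TCP connections do.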

    Read the article

  • SQL SERVER – Index Created on View not Used Often – Observation of the View – Part 2

    - by pinaldave
Earlier, I wrote an article about SQL SERVER – Index Created on View not Used Often – Observation of the View. I received an email from one of the readers asking whether there would be any problems if we also create the index on the base table. We need to discuss this situation in two different cases. Before proceeding to the discussion, I strongly suggest you read my earlier articles; to avoid duplication, I am not going to repeat the code and explanation here. In all the earlier cases, I explained in detail how the index created on the view is not utilized:

SQL SERVER – Index Created on View not Used Often – Limitation of the View 12
SQL SERVER – Index Created on View not Used Often – Observation of the View
SQL SERVER – Indexed View always Use Index on Table

As per the earlier blog posts, so far we have done the following:

Create a table
Create a view
Create an index on the view
Write SELECT with ORDER BY on the view

However, the blog reader who emailed me suggests extending this logic as follows:

Create a table
Create a view
Create an index on the view
Write SELECT with ORDER BY on the view
Create an index on the base table
Write SELECT with ORDER BY on the view

After doing the last two steps, the question is: "Will the query on the view utilize the index on the view, or will it still use the index of the base table?" Let us first run the create example.

USE tempdb
GO
IF EXISTS (SELECT * FROM sys.views WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[SampleView]'))
DROP VIEW [dbo].[SampleView]
GO
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[mySampleTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[mySampleTable]
GO
-- Create SampleTable
CREATE TABLE mySampleTable (ID1 INT, ID2 INT, SomeData VARCHAR(100))
INSERT INTO mySampleTable (ID1,ID2,SomeData)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY o1.name),
ROW_NUMBER() OVER (ORDER BY o2.name), o2.name
FROM sys.all_objects o1
CROSS JOIN sys.all_objects o2
GO
-- Create View
CREATE VIEW SampleView
WITH SCHEMABINDING
AS
SELECT ID1,ID2,SomeData
FROM dbo.mySampleTable
GO
-- Create Index on View
CREATE UNIQUE CLUSTERED INDEX [IX_ViewSample] ON [dbo].[SampleView]
(
ID2 ASC
)
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO
-- Create Index on Original Table
-- On Column ID1
CREATE UNIQUE CLUSTERED INDEX [IX_OriginalTable] ON mySampleTable
(
ID1 ASC
)
GO
-- On Column ID2
CREATE UNIQUE NONCLUSTERED INDEX [IX_OriginalTable_ID2] ON mySampleTable
(
ID2
)
GO
-- Select from view
SELECT ID1,ID2,SomeData
FROM SampleView
ORDER BY ID2
GO

Now let us see the execution plans for both of the SELECT statements.

Before Index on Base Table (with Index on View):
After Index on Base Table (with Index on View):

Looking at both executions, it is very clear that, with or without the index on the base table, the view is using indexes. Alright, I have written about 11 disadvantages of views. Now I have written about one case where the view is using indexes. Anybody who says that I am being harsh on views can now say that I found one place where an index on a view can be helpful.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL View, SQLServer, T SQL, Technology
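
A related check, not part of the original script: you can cross-verify which index served each SELECT by querying sys.dm_db_index_usage_stats, and on non-Enterprise editions the optimizer only considers an indexed view when the query references it with the NOEXPAND hint. A minimal sketch, assuming the SampleView and mySampleTable objects created above in tempdb:

-- Reference the indexed view explicitly (NOEXPAND prevents the view from being
-- expanded, so its own clustered index is used; required on non-Enterprise editions)
SELECT ID1, ID2, SomeData
FROM dbo.SampleView WITH (NOEXPAND)
ORDER BY ID2
GO
-- See which indexes the earlier SELECTs actually touched
SELECT OBJECT_NAME(us.[object_id]) AS object_name,
i.name AS index_name,
us.user_seeks, us.user_scans
FROM sys.dm_db_index_usage_stats us
INNER JOIN sys.indexes i
ON i.[object_id] = us.[object_id] AND i.index_id = us.index_id
WHERE us.database_id = DB_ID('tempdb')
AND OBJECT_NAME(us.[object_id]) IN ('SampleView', 'mySampleTable')
GO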

    Read the article
