Search Results

Search found 6654 results on 267 pages for 'socket io'.


  • Node.js Adventure - Node.js on Windows

    - by Shaun
    Two weeks ago I had a talk with Wang Tao, a C# MVP in China who is currently running his startup company and its product, Worktile. He asked me to figure out a synchronization solution to help his product in the future, and he preferred that I implement the service in Node.js, since Worktile itself is written in Node.js. Even though I have some experience in ASP.NET MVC, HTML, CSS and JavaScript, I don’t consider myself a JavaScript expert. In fact I’m very new to it, so it scared me a bit when he asked me to use Node.js. But after about one week of investigation I have to say that Node.js is very easy to learn, use and deploy, even if you have very limited JavaScript skill. And I think I have come to love Node.js. Hence I decided to start a series named “Node.js Adventure”, where I will tell the story of learning and using Node.js on Windows and Windows Azure. This is the first post.

    (Brief) Introduction of Node.js

    I don’t want to give a fully detailed introduction of Node.js here; there are many resources we can find on the internet, and the best one is its homepage. Node.js was created by Ryan Dahl and is sponsored by Joyent. It consists of roughly 80% C/C++ for the core and 20% JavaScript for the API, and it uses CommonJS as its module system, which we will explain later. The official definition of Node.js is:

    “Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.”

    First of all, Node.js uses JavaScript as its development language and runs on top of the V8 engine, the same engine used by Chrome. It brings JavaScript, a client-side language, into the backend service world, which is why many people say (not entirely accurately) that “Node.js is server-side JavaScript”. Additionally, Node.js uses an event-driven, non-blocking IO model. This means that in Node.js there is no way to block the working thread: every operation in Node.js executes asynchronously. This is a huge benefit, especially if our code needs IO operations such as reading disks, connecting to a database or consuming a web service.

    Unlike IIS or Apache, Node.js does not use a multi-threaded model. In Node.js there is only one working thread serving all user requests and responses, shown as the ST star in the figure below, and there is a POSIX async thread pool containing many async threads (the AT stars) for IO operations. When a user makes an IO request, the ST accepts it but does not perform the IO itself. Instead, the ST picks an AT from the POSIX async thread pool, hands the operation to it, and goes back to serve other requests, while the AT performs the IO operation asynchronously. Suppose another user arrives before the AT completes its IO operation: the ST serves this new request, picks up another AT from the pool, and returns again. When the first AT finishes its IO, it takes the result back and waits for the ST; the ST takes the response, returns the AT to the pool, and responds to the user. If the second AT finishes its job, the ST responds to the second user in the same way. As you can see, in Node.js there is only one thread serving clients’ requests and POSIX results. This thread loops between the users and POSIX, passing the data back and forth, while the async jobs are handled by POSIX. (A short sketch below expresses the same idea in code.)
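    The sketch is in Python’s asyncio rather than JavaScript, purely as a cross-language illustration of the single-threaded event loop described above: one loop thread hands slow IO off and keeps serving other work while it waits.

        import asyncio

        async def handle_request(request_id: int) -> str:
            # The event loop (the "ST") starts this coroutine; the await hands
            # the slow IO off and frees the loop to serve other requests.
            await asyncio.sleep(1)  # stand-in for a disk/database/network call
            return "response for request %d" % request_id

        async def main() -> None:
            # Three "users" arrive; all three waits overlap on one thread,
            # so the total run time is about 1 second rather than 3.
            results = await asyncio.gather(*(handle_request(i) for i in range(3)))
            for line in results:
                print(line)

        asyncio.run(main())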
    This is the event-driven, non-blocking IO model. The performance of this model is much better than that of the multi-threaded blocking model. For example, Apache is built on the multi-threaded blocking model while Nginx uses the event-driven non-blocking model; below are the performance comparison and the memory usage comparison between them. These charts are captured from the video “NodeJS Basics: An Introductory Training”, presented by the Cloud Foundry Developer Advocate.

    Node.js on Windows

    Executing a Node.js application on Windows is very simple. First we need to download the latest Node.js platform from its website. Once installed, it registers its folder in the system PATH variable so that we can execute Node.js from anywhere. To confirm the Node.js installation, just open up a command window and type “node”; the Node.js console will appear. As you can see, this is an interactive JavaScript console, where we can type simple JavaScript code and commands.

    To run a Node.js application, just specify the source code file name as the argument of the “node” command. For example, let’s create a Node.js source code file named “helloworld.js” and copy in a sample from the Node.js website:

        var http = require("http");

        http.createServer(function (req, res) {
            res.writeHead(200, {"Content-Type": "text/plain"});
            res.end("Hello World\n");
        }).listen(1337, "127.0.0.1");

        console.log("Server running at http://127.0.0.1:1337/");

    This code creates a web server listening on port 1337 that returns “Hello World” whenever a request comes in. Run it in the command window, then open a browser and navigate to http://localhost:1337/.

    As you can see, when using Node.js we are not so much creating a web application as creating a web server. We need to deal with the request, the response and the related headers, status codes, etc., and this is one of the benefits of using Node.js: it is lightweight and straightforward. But creating a website from scratch again and again is not acceptable. The good news is that Node.js uses CommonJS as its module system, so we can leverage modules to simplify our job. Furthermore, there are about ten thousand modules available on the internet, covering almost all areas of server-side application development.

    NPM and Node.js Modules

    Node.js uses CommonJS as its module system. A module is a set of JavaScript files. In Node.js, if we have an entry file named “index.js”, all the modules it needs will be located in the “node_modules” folder, and in “index.js” we can import a module by specifying its name. For example, in the code we’ve just created we imported a module named “http”, a built-in module installed along with Node.js, so that we can use the code in this “http” module.

    Besides the built-in modules there are many modules available at the NPM website, where thousands of developers are contributing and downloading modules. Hence this is another benefit of using Node.js: there are many modules we can use, the number of modules grows very fast, and we can also publish our own modules to the community. When I wrote this post there were in total 14,608 modules on NPM, with about ten thousand downloads per day.

    Installing a module is very simple. Let’s go back to our command window and input the command “npm install express”. This command will install a module named “express”, which is an MVC framework on top of Node.js.
    Now let’s create another JavaScript file named “helloweb.js” and copy the code below into it. It imports the “express” module. When the user browses the home page it responds with a text message; if the incoming URL matches “/Echo/:value”, where “value” is whatever the user specified, it passes the value back together with the current date and time in JSON format. Finally, the website listens on port 12345.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        app.get("/Echo/:value", function(req, res) {
            var value = req.params.value;
            res.json({
                "Value" : value,
                "Time" : new Date()
            });
        });

        console.log("Web application opened.");
        app.listen(12345);

    For more information and the API of “express”, please have a look here. Start our application from the command window with the command “node helloweb.js”, then navigate to the home page and we can see the response in the browser. And if we go to, for example, http://localhost:12345/Echo/Hello Shaun, we can see the JSON result.

    The “express” module is very popular on NPM. It makes the job simple when we need to build an MVC website. There are many other very useful modules on NPM, for example:
    - underscore: a utility module covering many common functions such as forEach, map, reduce, select, etc.
    - request: a very simple HTTP request client.
    - async: a library for coordinating async operations.
    - wind: a library which enables us to control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps.

    Node.js and IIS

    I demonstrated how to run a Node.js application from the console. Since we are on Windows, another common requirement would be: “Can I host Node.js in IIS?” The answer is “Yes”. Tomasz Janczuk created a project, IISNode, at his GitHub space, which we can find here, and Scott Hanselman has published a blog post introducing it.

    Summary

    In this post I provided a very brief introduction to Node.js, including its official definition, its architecture and how it implements the event-driven, non-blocking model. I then described how to install and run a Node.js application on the Windows console, and covered the Node.js module system and the NPM command. At the end I referred to some links about IISNode, an IIS extension that allows Node.js applications to run on IIS.

    Node.js has become a very popular server-side application platform, especially this year. By leveraging its non-blocking IO model and async features, it is very useful for building highly scalable, asynchronous services. I think Node.js will be used widely in cloud application development in the near future.

    In the next post I will explain how to use SQL Server from Node.js.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided “AS IS” without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Eventlet client-server

    - by johannix
    I'm trying to set up a server-client interaction: the server gets a string from the client, then the server returns data to the client. I got this working with socket.send and socket.recv, but I was wondering how to get it working with file objects through socket.makefile. I've got most of the pieces working, but I don't know how to force each of the pieces to read at the appropriate time. What appears to be happening is that both the client and the server are reading on the socket at the same time. Any ideas on how to let the server complete the original read before the client starts waiting for its information? Below are the core pieces that reproduce the read lock.

    Server:

        def handle(green_socket):
            file_handler = green_socket.makefile('rw')
            print 'before server read'
            lines = file_handler.readlines()
            print "readlines: %s" % (lines)

        print '** Server started **'
        server = eventlet.listen((HOST, PORT))
        pool = eventlet.GreenPool()
        while True:
            new_sock, address = server.accept()
            pool.spawn_n(handle, new_sock)

    Client:

        from eventlet.green import socket

        green_socket = eventlet.connect((HOST, PORT))
        file_handler = green_socket.makefile('rw')
        print 'before write'
        file_handler.writelines('client message')
        file_handler.flush()
        print 'before client read'
        lines = file_handler.readlines()
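    For what it's worth, readlines() blocks until the peer closes its end of the connection, so with both sides calling it, each process ends up waiting for the other. A common way out, sketched below under the assumption that one message fits on one line (plain sockets shown; eventlet's green sockets expose the same file-like API), is to delimit each message with a newline and read a single line at a time:

        import socket

        HOST, PORT = "127.0.0.1", 12345

        def client_roundtrip(message):
            # Send one newline-terminated message, then read one line back.
            sock = socket.create_connection((HOST, PORT))
            fh = sock.makefile("rw")
            fh.write(message + "\n")   # the newline marks end-of-message
            fh.flush()
            reply = fh.readline()      # returns as soon as one full line arrives,
            sock.close()               # unlike readlines(), which waits for EOF
            return reply.rstrip("\n")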

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we’ll need some example data. In fact, we are going to use three tables today, each of which is structured like this: each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL
            DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
        );

        CREATE TABLE dbo.TestMAX
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
        );

        CREATE TABLE dbo.TestTEXT
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
        );

        -- =============
        -- Load TestCHAR (about 3s)
        -- =============
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000)
                n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;

        -- ============
        -- Load TestMAX (about 3s)
        -- ============
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- =============
        -- Load TestTEXT (about 5s)
        -- =============
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- ==========
        -- Space used
        -- ==========
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

        CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150)
            T.id, T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 – CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read  = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TC.id, TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB  = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let’s take a closer look at the statistics and query plan generated from this: following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

    CHAR(3999) Performance Summary:
    - 5 seconds elapsed time
    - 243MB memory grant
    - 195MB tempdb usage
    - 192MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 2 – VARCHAR(MAX)

    We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read  = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TM.id, TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB  = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

    MAX Performance Summary:
    - 8 seconds elapsed time
    - 245MB memory grant
    - 391MB tempdb usage
    - 193MB estimated sort set
    - 25,043 logical reads
    - Sort Warning

    Test 3 – TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read  = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TT.id, TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB  = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it’s not hard to understand why:

    TEXT Performance Summary:
    - 0.5 seconds elapsed time
    - 9MB memory grant
    - 5MB tempdb usage
    - 5MB estimated sort set
    - 207 logical reads
    - 596 LOB logical reads
    - Sort Warning

    SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type, by contrast, is to be stored off-row, unless the ‘text in row’ table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id      INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC
            ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

        CHECKPOINT;

    Test 4 – MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read  = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            MO.id, MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB  = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:
    - 0.3 seconds elapsed time
    - 245MB memory grant
    - 0MB tempdb usage
    - 193MB estimated sort set
    - 207 logical reads
    - 446 LOB logical reads
    - No sort warning

    The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
    Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That’s 32,000 8KB pages that might be put to much better use.

    The Solution

    The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

    The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

        SET STATISTICS IO ON;

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestCHAR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id = ANY (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAX
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestTEXT
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        WITH TestTable AS
        (
            SELECT * FROM dbo.TestMAXOOR
        ),
        TopKeys AS
        (
            SELECT TOP (150) id
            FROM TestTable
            ORDER BY NEWID()
        )
        SELECT T1.id, T1.padding
        FROM TestTable AS T1
        WHERE T1.id IN (SELECT id FROM TopKeys)
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

    All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

        Table 'TestCHAR'   logical reads 25511  lob logical reads 000
        Table 'TestMAX'    logical reads 25511  lob logical reads 000
        Table 'TestTEXT'   logical reads 00412  lob logical reads 597
        Table 'TestMAXOOR' logical reads 00413  lob logical reads 446

    We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
        CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

    The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

        Table 'TestCHAR'   logical reads 835  lob logical reads 0
        Table 'TestMAX'    logical reads 835  lob logical reads 0
        Table 'TestTEXT'   logical reads 686  lob logical reads 597
        Table 'TestMAXOOR' logical reads 686  lob logical reads 448

    With the new index, all four queries use the same query plan (click to enlarge):

    Performance Summary:
    - 0.3 seconds elapsed time
    - 6MB memory grant
    - 0MB tempdb usage
    - 1MB sort set
    - 835 logical reads (CHAR, MAX)
    - 686 logical reads (TEXT, MAXOOR)
    - 597 LOB logical reads (TEXT)
    - 448 LOB logical reads (MAXOOR)
    - No sort warning

    I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

    Conclusion

    This post is not about tuning queries that access columns containing big strings. It isn’t about the internal differences between TEXT and MAX data types either. It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows. 5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either.

    The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

    By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch…

    This post is dedicated to the people of Christchurch, New Zealand.

    © 2011 Paul White
    email: @[email protected]
    twitter: @SQL_Kiwi

    Read the article

  • USB mouse does not work on boot

    - by Uku Loskit
    My problem is pretty much a duplicate of the one described in USB mouse late to load, but the solution there has not worked for me. I'm running the same OS and experiencing the exact same issue. It disappears after 10 seconds or so. Booting with the options specified in the other question did not fix it :/ Thanks in advance.

        sheepz@sheepz-desktop:~$ dmesg | egrep "hci|usb"
        [     0.188000] usbcore: registered new interface driver usbfs
        [     0.188000] usbcore: registered new interface driver hub
        [     0.188000] usbcore: registered new device driver usb
        [     0.358613] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
        [     0.358627] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
        [     0.358637] uhci_hcd: USB Universal Host Controller Interface driver
        [     0.358683] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23
        [     0.358691] uhci_hcd 0000:00:1d.0: setting latency timer to 64
        [     0.358695] uhci_hcd 0000:00:1d.0: UHCI Host Controller
        [     0.358726] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 1
        [     0.358758] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000e100
        [     0.358927] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19
        [     0.358932] uhci_hcd 0000:00:1d.1: setting latency timer to 64
        [     0.358935] uhci_hcd 0000:00:1d.1: UHCI Host Controller
        [     0.358964] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 2
        [     0.358991] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000e200
        [     0.359132] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18
        [     0.359137] uhci_hcd 0000:00:1d.2: setting latency timer to 64
        [     0.359139] uhci_hcd 0000:00:1d.2: UHCI Host Controller
        [     0.359165] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 3
        [     0.359193] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000e300
        [     0.359327] uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16
        [     0.359332] uhci_hcd 0000:00:1d.3: setting latency timer to 64
        [     0.359334] uhci_hcd 0000:00:1d.3: UHCI Host Controller
        [     0.359360] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 4
        [     0.359387] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000e400
        [     0.731933] usb 1-1: new full speed USB device using uhci_hcd and address 2
        [     1.023859] usb 1-2: new full speed USB device using uhci_hcd and address 3
        [    16.136175] usb 1-2: device descriptor read/64, error -110
        [    31.352481] usb 1-2: device descriptor read/64, error -110
        [    31.568485] usb 1-2: new full speed USB device using uhci_hcd and address 4
        [    46.680794] usb 1-2: device descriptor read/64, error -110
        [    61.903555] usb 1-2: device descriptor read/64, error -110
        [    62.119671] usb 1-2: new full speed USB device using uhci_hcd and address 5
        [    72.541078] usb 1-2: device not accepting address 5, error -110
        [    72.653194] usb 1-2: new full speed USB device using uhci_hcd and address 6
        [    83.066637] usb 1-2: device not accepting address 6, error -110
        [    83.178615] usb 3-1: new low speed USB device using uhci_hcd and address 2
        [    83.562546] usbcore: registered new interface driver hiddev
        [    83.578827] input: Logitech USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input3
        [    83.579016] generic-usb 0003:046D:C01D.0001: input,hidraw0: USB HID v1.10 Mouse [Logitech USB-PS/2 Optical Mouse] on usb-0000:00:1d.2-1/input0
        [    83.579244] usbcore: registered new interface driver usbhid
        [    83.579246] usbhid: USB HID core driver
        [114025.224407] usb 3-1: USB disconnect, address 2

    Read the article

  • Storage Configuration

    - by jchang
    Storage performance is not an inherently complicated subject. The concepts are relatively simple. In fact, scaling storage performance is far easier than overcoming the difficulties encountered in scaling processor performance in NUMA systems. Storage performance is achieved by properly distributing IO over: 1) multiple independent PCI-E ports (system memory and IO bandwidth is key) 2) multiple RAID controllers or host bus adapters (HBAs) 3) multiple storage IO channels (SAS or FC, complete path) most importantly,...(read more)

    Read the article


  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control, we will continue to use Cruise Control for continuous integration, although I'm updating it to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we came across.

    Problem

    After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block while refreshing the source code:

        System.IO.IOException: The directory is not empty.
           at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
           at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

        Project: Bupa.BPI.Documents
        Date of build: 2011-01-28 14:54:21
        Running time: 00:00:05
        Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

    Solution

    The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason, when the source control block does a full refresh of the code it does not get rid of this folder, then complains that this is a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past.

    To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:

        <prebuild>
            <exec>
                <executable>cmd.exe</executable>
                <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
            </exec>
        </prebuild>

    Read the article

  • DexFile.class error in eclipse

    - by ninjasense
    I get this weird error every time I debug in Eclipse. It just seemed to appear one day, and I was wondering if anyone else was running into the same problem. It does not affect my app in any visible way and does not cause a crash, but it is an annoyance while debugging. Here is the full error:

        // Compiled from DexFile.java (version 1.5 : 49.0, super bit)
        public final class dalvik.system.DexFile {

          // Method descriptor #8 (Ljava/io/File;)V
          // Stack: 3, Locals: 2
          public DexFile(java.io.File file) throws java.io.IOException;
            0 aload_0 [this]  1 invokespecial java.lang.Object() [1]  4 new java.lang.RuntimeException [2]  7 dup  8 ldc <String "Stub!"> [3]  10 invokespecial java.lang.RuntimeException(java.lang.String) [4]  13 athrow
            Line numbers: [pc: 0, line: 4]
            Local variable table:
              [pc: 0, pc: 14] local: this index: 0 type: dalvik.system.DexFile
              [pc: 0, pc: 14] local: file index: 1 type: java.io.File

          // Method descriptor #18 (Ljava/lang/String;)V
          // Stack: 3, Locals: 2
          public DexFile(java.lang.String fileName) throws java.io.IOException;
            0 aload_0 [this]  1 invokespecial java.lang.Object() [1]  4 new java.lang.RuntimeException [2]  7 dup  8 ldc <String "Stub!"> [3]  10 invokespecial java.lang.RuntimeException(java.lang.String) [4]  13 athrow
            Line numbers: [pc: 0, line: 5]
            Local variable table:
              [pc: 0, pc: 14] local: this index: 0 type: dalvik.system.DexFile
              [pc: 0, pc: 14] local: fileName index: 1 type: java.lang.String

          // Method descriptor #22 (Ljava/lang/String;Ljava/lang/String;I)Ldalvik/system/DexFile;
          // Stack: 3, Locals: 3
          public static dalvik.system.DexFile loadDex(java.lang.String sourcePathName, java.lang.String outputPathName, int flags) throws java.io.IOException;
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 6]
            Local variable table:
              [pc: 0, pc: 10] local: sourcePathName index: 0 type: java.lang.String
              [pc: 0, pc: 10] local: outputPathName index: 1 type: java.lang.String
              [pc: 0, pc: 10] local: flags index: 2 type: int

          // Method descriptor #28 ()Ljava/lang/String;
          // Stack: 3, Locals: 1
          public java.lang.String getName();
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 7]
            Local variable table:
              [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

          // Method descriptor #30 ()V
          // Stack: 3, Locals: 1
          public void close() throws java.io.IOException;
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 8]
            Local variable table:
              [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

          // Method descriptor #32 (Ljava/lang/String;Ljava/lang/ClassLoader;)Ljava/lang/Class;
          // Stack: 3, Locals: 3
          public java.lang.Class loadClass(java.lang.String name, java.lang.ClassLoader loader);
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 9]
            Local variable table:
              [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile
              [pc: 0, pc: 10] local: name index: 1 type: java.lang.String
              [pc: 0, pc: 10] local: loader index: 2 type: java.lang.ClassLoader

          // Method descriptor #37 ()Ljava/util/Enumeration;
          // Signature: ()Ljava/util/Enumeration<Ljava/lang/String;>;
          // Stack: 3, Locals: 1
          public java.util.Enumeration entries();
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 10]
            Local variable table:
              [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

          // Method descriptor #30 ()V
          // Stack: 3, Locals: 1
          protected void finalize() throws java.io.IOException;
            0 new java.lang.RuntimeException [2]  3 dup  4 ldc <String "Stub!"> [3]  6 invokespecial java.lang.RuntimeException(java.lang.String) [4]  9 athrow
            Line numbers: [pc: 0, line: 11]
            Local variable table:
              [pc: 0, pc: 10] local: this index: 0 type: dalvik.system.DexFile

          // Method descriptor #42 (Ljava/lang/String;)Z
          public static native boolean isDexOptNeeded(java.lang.String arg0) throws java.io.FileNotFoundException, java.io.IOException;
        }

    Thanks

    Read the article

  • How to download a file from a server using Java Socket?

    - by Ada
    Hi, I have an assignment about uploading and downloading a file to a server. I managed to do the uploading part using Java sockets; however, I am having a hard time doing the downloading part. I should use the Range: header for downloading in parallel, so my request should have the Range: header in it. But I don't understand how I will receive the file with that HTTP GET request. All the examples I have seen were about uploading a file, and I already did that part: I can upload .exe, image and .pdf files, anything, and when I download them back (with my browser) there are no errors. Can you help me with the downloading part? Can you give me an example, because I really didn't get it.
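    Not an answer in Java, but as a hedged sketch of the HTTP mechanics the assignment is after (the host, path and chunk size below are made up): each GET carries a Range header naming a byte span, and the body of the server's 206 response contains exactly those bytes, so several requests can fetch different slices in parallel and the slices can then be concatenated in order.

        import http.client

        HOST, PATH = "example.com", "/files/report.pdf"  # hypothetical server/file
        CHUNK = 64 * 1024

        def fetch_range(start, end):
            conn = http.client.HTTPConnection(HOST)
            # Ask for bytes start..end inclusive; a server that supports ranges
            # answers with status 206 (Partial Content) and only that slice.
            conn.request("GET", PATH, headers={"Range": "bytes=%d-%d" % (start, end)})
            resp = conn.getresponse()
            if resp.status != 206:
                raise RuntimeError("server ignored Range (got %d)" % resp.status)
            data = resp.read()
            conn.close()
            return data

        # Two slices that could be fetched in parallel, then joined in order.
        part1 = fetch_range(0, CHUNK - 1)
        part2 = fetch_range(CHUNK, 2 * CHUNK - 1)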

    Read the article

  • why does text from socket server erase previously written text?

    - by mix
    This is strange enough that I'm not sure how to search for an answer. I have a program in Python that communicates via TCP/IP sockets with a telnet-based server. If I telnet in manually and type commands like this: SET MDI G0 X0 Y0, the server will spit back a line like this: SET MDI ACK. Pretty standard stuff.

    Here's the weird part. If, in my code, I precede my printing of each of these lines with some text, the returned line erases what I'm trying to print before it. So for example, if I write the code so the output should look like this:

        SENT: SET MDI G0 X0 Y0
        READ: SET MDI ACK

    What I get instead is:

        SENT: SET MDI G0 X0 Y0
        SET MDI ACK

    Now, if I make the "READ: " text a bit longer, I can get a better idea of what's happening. Let's say I change "READ: " to "12345678901234567890: ", so that it should read as:

        12345678901234567890: SET MDI ACK

    What I get instead is:

        SET MDI ACK234567890:

    So it seems like whatever text I'm getting back from the server is somehow deleting what I'm trying to precede it with. I tried saving all of my read lines in a list, and then printing them out at the end, but it does exactly the same thing. Any ideas on what's going on, or even on how to debug this? Is there a way to get Python to show me any hidden chars in a string, for example? thx!
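    On the last question: repr() makes hidden characters visible. The sketch below assumes the usual culprit, a stray carriage return ("\r") at the start of the received text (telnet-style servers terminate lines with CRLF, and depending on how the stream is split the "\r" can land at the front of a line); the "\r" moves the cursor back to column 0, so the server text overprints the prefix.

        line = "\rSET MDI ACK"       # assumed: a stray leading \r from the server
        print(repr(line))            # -> '\rSET MDI ACK', the \r is now visible
        print("READ: " + line)       # the \r returns the cursor to column 0, so
                                     # "SET MDI ACK" overprints "READ: "
        print("READ: " + line.strip())  # stripped: prints as expected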

    Read the article

  • gethostname() returns accurate hostname, bind() doesn't like it

    - by user2072848
    Doing a Python socket tutorial; the entire codebase is as follows:

        import socket as so

        s = so.socket()
        host = so.gethostname()
        port = 12345
        s.bind((host, port))
        s.listen(5)

        while True:
            c, addr = s.accept()
            print 'Got connection from', addr
            c.send('Thank you for connecting')
            c.close()

    and the error message:

        Traceback (most recent call last):
          File "server.py", line 13, in <module>
            s.bind((host, port))
          File "/Users/solid*name*/anaconda/lib/python2.7/socket.py", line 224, in meth
            return getattr(self._sock,name)(*args)
        socket.gaierror: [Errno 8] nodename nor servname provided, or not known

    Printing the hostname gives me:

        super*name*

    which is, in fact, my computer's hostname.
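    The gaierror suggests that the name gethostname() returns cannot be resolved to an address (no matching entry in DNS or /etc/hosts), so bind() fails before any listening happens. A hedged workaround sketch is to bind to an explicit address instead of the machine name:

        import socket

        s = socket.socket()
        # '' means "all local interfaces"; '127.0.0.1' would accept only local
        # connections. Neither requires resolving the machine's hostname.
        s.bind(('', 12345))
        s.listen(5)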

    Read the article

  • Explaining Explain Plan Notes for Auto DOP

    - by jean-pierre.dijcks
    I've recently gotten some questions along the lines of "why do I not see a parallel plan while Auto DOP is on (I think)...?" It is probably worthwhile to quickly go over some of the ways to find out what Auto DOP was thinking. In general, there is no need to trace sessions and look under the hood. The place to start is an explain plan on your statement, together with a look at the parameter settings on the system.

    Parameter Settings to Look At. First and foremost, make sure that parallel_degree_policy = AUTO. If you have that parameter set to LIMITED you will not have queuing, and we will only do the auto magic if your objects are set to default parallel (so no degree specified). Next you want to look at the value of parallel_degree_limit. It is typically set to CPU, which in default settings equates to the Default DOP of the system. If you are testing Auto DOP itself and the impact it has on performance, you may want to leave it at this CPU setting. If you are running concurrent statements, you may want to give this some more thought; see here for more information. In general, stick with either CPU or a specific number. For now avoid the IO setting, as I've seen some mixed results with it... In 11.2.0.2 you should also check that IO calibration has been run. It is best to simply do:

    SQL> select * from V$IO_CALIBRATION_STATUS;

    STATUS        CALIBRATION_TIME
    ------------- ----------------------------------------------------------------
    READY         04-JAN-11 10.04.13.104 AM

    You should see that your IO calibration is READY, and therefore Auto DOP is ready. In any case, if you did not run the IO calibration step you will get the following note in the explain plan:

    Note
    -----
       - automatic DOP: skipped because of IO calibrate statistics are missing

    One more note on calibrate_io: if you do not have asynchronous IO enabled you will see:

    ERROR at line 1:
    ORA-56708: Could not find any datafiles with asynchronous i/o capability
    ORA-06512: at "SYS.DBMS_RMIN", line 463
    ORA-06512: at "SYS.DBMS_RESOURCE_MANAGER", line 1296
    ORA-06512: at line 7

    While this is changed in some fixes to the calibrate procedure, you should really consider switching asynchronous IO on for your data warehouse.

    Explain Plan Explanation. To see the notes that are shown and explained here (and the little snippet above), you can use a simple explain plan; there should be no need to add +parallel etc.:

    explain plan for <statement>;
    SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

    Auto DOP. The note structure explaining why Auto DOP did not kick in (with the exception noted above on IO calibration) looks like this:

    Automatic degree of parallelism is disabled: <reason>

    These are the reason codes:
    - Parameter: parallel_degree_policy = MANUAL, which will not allow Auto DOP to kick in.
    - Hint: one of the hints NOPARALLEL, PARALLEL(1), or PARALLEL(MANUAL) is used.
    - Outline: a SQL outline of an older version (before 11.2) is used.
    - SQL property restriction: the statement type does not allow for parallel processing.
    - Rule-based mode: the system is using the rule-based optimizer instead of the cost-based optimizer.
    - Recursive SQL statement: the statement type does not allow for parallel processing.
    - pq disabled/pdml disabled/pddl disabled: for some reason (alter session?) parallelism is disabled.
    - Limited mode but no parallel objects referenced: parallel_degree_policy = LIMITED and no objects in the statement are decorated with the default PARALLEL degree. In most cases all objects have a specific degree, in which case Auto DOP will honor that degree.

    Parallel Degree Limited. When Auto DOP does its work, you may see the cap you imposed with parallel_degree_limit showing up in the note section of the explain plan:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 16 because of degree limit

    This is an obvious indication that you are being capped for this statement. There is one quite interesting case when you are being capped at DOP = 1: you get a serial plan, and the note changes slightly in that it does not indicate you are being capped (we hope to update the note at some point to be more specific). Right now it looks like this:

    Note
    -----
       - automatic DOP: Computed Degree of Parallelism is 1

    Dynamic Sampling. With 11.2.0.2 you will start seeing another interesting change in parallel plans, and since we are talking about the note section here, I figured we would throw this in for good measure. If we deem the parallel (!) statement complex enough, we will enact dynamic sampling on your query. This happens as long as you did not change the default for dynamic sampling on the system. The note looks like this:

    Note
    -----
       - dynamic sampling used for this statement (level=5)
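    If you script this check, the same information is reachable from a client program. Below is a minimal JDBC sketch (the connection URL, credentials, and the sample query are assumptions, not part of the original article) that runs an explain plan and prints the output, including the Note section where the Auto DOP decisions above show up:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: run EXPLAIN PLAN for a statement and dump the formatted
    // plan, including the Note section where Auto DOP decisions appear.
    // The JDBC URL, credentials, and the sample query are assumptions.
    public class ShowAutoDopNotes {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = con.createStatement()) {
                // Populate PLAN_TABLE for the statement under test.
                stmt.execute("EXPLAIN PLAN FOR SELECT count(*) FROM sales");
                // DBMS_XPLAN.DISPLAY() returns the formatted plan, one row per line.
                try (ResultSet rs = stmt.executeQuery(
                         "SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY())")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }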

    Read the article

  • Coherence Warnings in WLS

    - by john.graves(at)oracle.com
    With 11g (10.3.4 WLS), Coherence is now built into many applications. I've been noticing warnings in my OSB logs like these:

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340643> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): UnicastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340650> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): PreferredUnicastUdpSocket failed to set receive buffer size to 1428 packets (1.99MB); actual size is 6%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340659> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): MulticastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    I was able to "fix" this on my Ubuntu system by adding the following lines to the /etc/sysctl.conf file:

    # Setup networking for coherence
    # maximum receive socket buffer size, default 131071
    net.core.rmem_max = 2000000
    # maximum send socket buffer size, default 131071
    net.core.wmem_max = 1000000
    # default receive socket buffer size, default 65535
    net.core.rmem_default = 2524287
    # default send socket buffer size, default 65535
    net.core.wmem_default = 2524287
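    As a quick way to verify the effect of those kernel settings, you can reproduce the same OS capping from plain Java, independent of Coherence: request a large receive buffer and check what was actually granted. A minimal sketch (the 2MB request size is an arbitrary assumption):

    import java.net.DatagramSocket;

    // Minimal sketch: request a large UDP receive buffer and print what the
    // OS actually granted. If the granted size is capped (for example at
    // net.core.rmem_max on Linux), you see the same mismatch Coherence warns about.
    public class BufferSizeCheck {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket()) {
                int requested = 2 * 1024 * 1024; // 2MB, an arbitrary test value
                socket.setReceiveBufferSize(requested);
                int granted = socket.getReceiveBufferSize();
                System.out.printf("requested=%d granted=%d%n", requested, granted);
            }
        }
    }

    Re-run the check after raising the sysctl values above; the granted size should then match the request and the Coherence warnings should disappear.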

    Read the article

  • Design Pattern for Server Emulator

    - by adisembiring
    I want to build a server socket emulator, and I want to apply some design patterns to it. Here is my case study, simplified: my server socket always listens for client sockets. When a request message comes in from a client socket, the server emulator responds to the client through the socket with a response code: '00' means the request message was processed successfully, and any response code other than '00' indicates an error while processing the request message. The server has a UI containing response parameters such as: response code, timeout interval. When the server responds to a client message, the response code is taken from the response input parameter on the UI, and the timeout interval input from the UI is used to make the thread sleep for that interval before responding. I have implemented this functionality, but I created it all in one class, and it feels wrong. Can you suggest which classes or interfaces I should create to refactor my code?
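    One reasonable direction (a sketch under assumptions, not the only possible refactoring; all class and method names here are hypothetical) is to separate the socket handling from the response policy with a small Strategy interface, so the UI only ever updates the strategy:

    // Sketch of a Strategy-based refactoring: the emulator core accepts
    // connections; a ResponsePolicy (fed from the UI) decides what to answer
    // and how long to wait. All names are hypothetical.
    interface ResponsePolicy {
        String responseCode();   // e.g. "00" for success
        long delayMillis();      // simulated processing time from the UI
    }

    class UiConfiguredPolicy implements ResponsePolicy {
        private volatile String code = "00";
        private volatile long delay = 0;
        public void update(String code, long delay) { this.code = code; this.delay = delay; }
        public String responseCode() { return code; }
        public long delayMillis() { return delay; }
    }

    class EmulatorHandler implements Runnable {
        private final java.net.Socket client;
        private final ResponsePolicy policy;
        EmulatorHandler(java.net.Socket client, ResponsePolicy policy) {
            this.client = client;
            this.policy = policy;
        }
        public void run() {
            try (java.net.Socket c = client) {
                c.getInputStream().read(new byte[1024]);    // read the request (one chunk, for the sketch)
                Thread.sleep(policy.delayMillis());         // simulate the configured latency
                c.getOutputStream().write(policy.responseCode().getBytes());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    The UI code would then call update(...) on the shared policy, and the accept loop would hand each incoming connection plus that policy to an EmulatorHandler on its own thread.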

    Read the article

  • crashing out in a while loop python

    - by Edward
    How do I solve this error? I want to take the values from get_robotxya() and get_ballxy() and use them in a loop, but it seems to crash after a while. How do I fix this so that I can keep getting the values without crashing out of the while loop?

    import socket
    import os, sys
    import time
    from threading import Thread

    HOST = '59.191.193.59'
    PORT = 5555
    COORDINATES = []

    def connect():
        globals()['client_socket'] = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        client_socket.connect((HOST, PORT))

    def update_coordinates():
        connect()
        screen_width = 0
        screen_height = 0
        while True:
            try:
                client_socket.send("loc\n")
                data = client_socket.recv(8192)
            except:
                connect()
                continue
            globals()['COORDINATES'] = data.split()
            if not (COORDINATES[-1] == "eom" and COORDINATES[0] == "start"):
                continue
            if screen_width != int(COORDINATES[2]):
                screen_width = int(COORDINATES[2])
                screen_height = int(COORDINATES[3])
                return

    def get_ballxy():
        update_coordinates()
        ballx = int(COORDINATES[8])
        bally = int(COORDINATES[9])
        return ballx, bally

    def get_robotxya():
        update_coordinates()
        robotx = int(COORDINATES[12])
        roboty = int(COORDINATES[13])
        angle = int(COORDINATES[14])
        return robotx, roboty, angle

    def print_ballxy(bx, by):
        print bx
        print by

    def print_robotxya(rx, ry, a):
        print rx
        print ry
        print a

    def activate():
        bx, by = get_ballxy()
        rx, ry, a = get_robotxya()
        print_ballxy(bx, by)
        print_robotxya(rx, ry, a)

    Thread(target=update_coordinates).start()

    while True:
        activate()

    This is the error I get:

    Read the article

  • How to change socket bind port of program? without source code.

    - by hunmr
    Hello everyone. PROBLEM: I have a program dummy.exe on Windows. After it starts, this program binds to UDP port 5060, but another program also wants to bind to port 5060. WHAT I HAVE DONE: Using windbg, I started dummy.exe and set a breakpoint on ws2_32!bind. When the breakpoint hit, I changed the parameter (the port value) with the ew command; dummy.exe then bound to the new port and worked well. QUESTION: How can I do this more easily? Should I write a simple Windows debugger? Maybe I can hack or modify the dummy.exe file itself, but how would I do that? What would be your way to achieve this? Thanks.

    Read the article

  • How can I intercept a Tomcat request at socket level?

    - by Miguel Pardal
    Hi, I'm doing a performance study of a web application framework running on Apache Tomcat 6. I'm trying to measure the time overhead of handling HTTP requests. What I would like to do is:

    // just before the first request byte is read
    long t1 = System.nanoTime();
    // request is processed...
    // just after the final byte is written to the response
    long t2 = System.nanoTime();

    Then I would compute the total time (t2 - t1). Is there a way to do this? Thanks for your help!
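    For the coarse-grained part of this, a servlet Filter gets close (a sketch; note the caveat in the comments): it can wrap the whole processing chain with System.nanoTime() calls, though it runs only after Tomcat has already read and parsed the request line and headers, so it does not capture true socket-level first-byte timing. For that you would have to instrument the Tomcat connector itself, for example with a custom Valve.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Sketch: times each request with a servlet Filter. Caveat: a Filter runs
    // after Tomcat has parsed the request line and headers, so this is an
    // approximation of, not a substitute for, socket-level timing.
    public class TimingFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long t1 = System.nanoTime();
            try {
                chain.doFilter(req, res); // the rest of the processing pipeline
            } finally {
                long t2 = System.nanoTime();
                System.out.println("request took " + (t2 - t1) + " ns");
            }
        }
    }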

    Read the article

  • How do I see if an established socket is stuck on a server that's expecting input?

    - by Parker
    I have a script that scans ports for open proxy servers. The problem is that if it encounters a login program (specifically telnet), it hangs there forever, since it doesn't know what to do, until the server eventually closes the connection. The simple solution would be to create a bunch of cases: if telnet, do this; if SSH, do that; if something else, and so on. I'd like an umbrella solution, since the script is not a high priority for me. The script, as it is now, is available at http://parkrrr.net/socks/scan.phps. On a small scale (the page averages maybe 15 hits/day) it's fine, but on a larger scale I'd be worried about a lot of open zombie sockets. Swapping in !$strpos doesn't work, since servers can return more information than what you requested (headers, ads, etc.). Only accepting a fixed number of bytes from fgets (as opposed to appending until EOF, which it does now) also does not seem to work. I am sure this is where it gets stuck:

    while (!feof($fp)) {
        $data .= fgets($fp, 512);
    }

    But what can I do? Any other suggestions/warnings would also be welcomed.
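    The usual umbrella solution for this class of problem, in any language, is to bound every connect and read with a timeout rather than special-casing protocols; in PHP, stream_set_timeout() serves the same purpose. A minimal sketch of the idea in Java (the host, port, and 3-second timeouts are arbitrary assumptions):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    // Sketch of the "umbrella" approach: bound every connect and read with a
    // timeout so a server that silently waits for input (telnet, SSH banners)
    // can never hang the scanner. Host and port are placeholder assumptions.
    public class TimeoutProbe {
        public static void main(String[] args) throws IOException {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress("example.com", 8080), 3000); // 3s connect timeout
                s.setSoTimeout(3000); // any single read blocks for at most 3s
                s.getOutputStream().write("GET / HTTP/1.0\r\n\r\n".getBytes());
                byte[] buf = new byte[512];
                InputStream in = s.getInputStream();
                int n;
                try {
                    while ((n = in.read(buf)) != -1) {
                        System.out.write(buf, 0, n);
                    }
                } catch (SocketTimeoutException e) {
                    // Server stopped talking but kept the socket open: give up cleanly.
                    System.err.println("read timed out, closing");
                }
                System.out.flush();
            }
        }
    }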

    Read the article

  • What is the effect of final variable declaration in methods?

    - by Finbarr
    Classic example of a simple server:

    class ThreadPerTaskSocketServer {
        public static void main(String[] args) throws IOException {
            ServerSocket socket = new ServerSocket(80);
            while (true) {
                final Socket connection = socket.accept();
                Runnable task = new Runnable() {
                    public void run() {
                        handleRequest(connection);
                    }
                };
                new Thread(task).start();
            }
        }
    }

    Why should the Socket be declared as final? Is it because the new Thread that handles the request could refer back to the socket variable in the method and cause some sort of ConcurrentModificationException?
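    For context, and as a sketch rather than a full answer: the final here is a Java language rule about locals captured by anonymous inner classes, not a concurrency guard, and has nothing to do with ConcurrentModificationException. The compiler copies the reference into the inner-class instance, so reassignment is forbidden to keep the two copies from silently diverging:

    // Minimal sketch: a local variable captured by an anonymous inner class
    // must be final (or, since Java 8, effectively final). The inner class
    // receives a copy of the reference, so reassigning the local after capture
    // is simply rejected by the compiler.
    public class CaptureDemo {
        public static void main(String[] args) {
            String message = "hello";
            Runnable r = new Runnable() {
                public void run() {
                    System.out.println(message); // OK: message is effectively final
                }
            };
            // message = "changed"; // uncommenting this makes the capture above a compile error
            new Thread(r).start();
        }
    }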

    Read the article

  • What happens to an instance of ServerSocket blocked inside accept(), when I drop all references to it?

    - by Hanno Fietz
    In a multithreaded Java application, I just tracked down a strange-looking bug, realizing that what seemed to be happening was this:

    - one of my objects was storing a reference to an instance of ServerSocket on startup,
    - one thread would, in its main loop in run(), call accept() on the socket while the socket was still waiting for a connection,
    - another thread would try to restart the component under some conditions,
    - the restart process missed the cleanup sequence before it reached the initialization sequence,
    - as a result, the reference to the socket was overwritten with a new instance, which then wasn't able to bind() anymore,
    - the socket which was blocking inside the accept() wasn't accessible anymore, leaving a complete shutdown and restart of the application as the only way to get rid of it.

    Which leaves me wondering: with no references left to the ServerSocket instance, what would free the socket for a new connection? At what point would the ServerSocket become garbage collected? In general, what are good practices I can follow to avoid this type of bug?
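    One common practice that avoids this (a minimal sketch; the class names are hypothetical): give the component single ownership of the ServerSocket and make every restart go through a shutdown that calls close() first. Closing the socket releases the port immediately, regardless of when the instance is garbage collected, and it unblocks the thread stuck in accept() by making accept() throw a SocketException:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.net.SocketException;

    // Sketch of a shutdown-friendly accept loop: the component owns exactly one
    // ServerSocket, and restart/cleanup closes it before re-initializing.
    // Closing the socket makes a blocked accept() throw SocketException,
    // which the loop treats as a stop signal. Names here are hypothetical.
    public class AcceptorComponent implements Runnable {
        private final ServerSocket serverSocket;

        public AcceptorComponent(int port) throws IOException {
            this.serverSocket = new ServerSocket(port);
        }

        public void run() {
            try {
                while (true) {
                    Socket connection = serverSocket.accept();
                    // hand the connection off to a worker...
                }
            } catch (SocketException e) {
                // expected on shutdown(): accept() was unblocked by close()
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        public void shutdown() throws IOException {
            serverSocket.close(); // unblocks accept(), frees the port for rebinding
        }
    }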

    Read the article

  • Interview Questions that can be asked on TCP/IP, UDP, Socket Programming?

    - by Shantanu Gupta
    I am going for an interview the day after tomorrow, where I will be asked various questions related to TCP/IP and UDP. So far I have prepared the theoretical knowledge, but now I am looking to gain some practical knowledge of how it works on a network and what goes on in the various .NET classes. I want to create a very small application, like a chat, that can make all these concepts clear to me. Could you please suggest some questions related to TCP/IP that you generally ask, or that you have faced, including how communication goes from server to client? Right now I am studying the TcpClient, TcpListener and UdpClient classes, but I want to implement all of them so as to become familiar with how they work. Is a chat application a TCP/IP application? I would appreciate your help.
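    The server-to-client flow the question asks about is the same in every stack, so here is a minimal TCP echo exchange sketched in Java (the port and message are arbitrary assumptions); ServerSocket plays the role of .NET's TcpListener and Socket the role of TcpClient:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal TCP echo exchange: the server accepts one connection and echoes
    // one line back to the client. Port 9000 and the message are arbitrary.
    public class EchoDemo {
        public static void main(String[] args) throws Exception {
            ServerSocket listener = new ServerSocket(9000); // bind before the client connects
            Thread server = new Thread(() -> {
                try (Socket conn = listener.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(conn.getInputStream()));
                     PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo one line back to the client
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();

            try (Socket client = new Socket("localhost", 9000);
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println("hello over TCP");
                System.out.println("server echoed: " + in.readLine());
            }
            server.join();
            listener.close();
        }
    }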

    Read the article
