Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • Writing a query to find MAX number in PL/SQL

    - by user2461116
    I am supposed to write a query that will display the largest number of movies rented by one member and that member's name. Give the output column a meaningful name such as MAXIMUM NUMBER. This is what I have:
    select max(maximum_movies) from (select count(*) maximum_movies from mm_member join mm_rental on mm_rental.member_id = mm_member.member_id group by first, last);
    I get the maximum number, but the output should look like this:
    First  Last  Maximum_movies
    John   Doe   4
    Instead, the output is only:
    Maximum_movies
    4
    Any suggestions?
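    One way to get the member's name alongside the count, rather than collapsing the outer query down to a single MAX(), is to keep first and last in the select list and filter the groups against the overall maximum. A rough sketch, assuming the same mm_member/mm_rental schema as above (Oracle accepts MAX(COUNT(*)) with a GROUP BY in a subquery):

      select m.first, m.last, count(*) as maximum_movies
      from mm_member m
      join mm_rental r on r.member_id = m.member_id
      group by m.first, m.last
      having count(*) = (select max(count(*))
                         from mm_rental
                         group by member_id);

    Note that this returns more than one row if several members tie for the maximum.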

    Read the article

  • Looking for a way to get HTTP Digest Authentication headers from incoming http requests

    - by duncancarroll
    I've been working on a REST implementation with my existing Cake install, and it's looking great except that I want to use HTTP Digest Authentication for all requests (Basic Auth won't cut it). So great, I'll generate a header in the client app (which is not cake) and send it to my cake install. Only problem is, I can't find a method for extracting that Digest from the request... I've looked through the Cake API for something that I can use to get the Digest Header. You'd think that Request Handler would be able to grab it, but I can't find anything resembling that. There must be another method of getting the digest that I am overlooking? In the meantime I'm writing my own regex to parse it out of the Request... once I'm done I'll post it here so no one has to waste as much time as I did hunting for it.
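    For what it's worth, when PHP runs as an Apache module the raw digest string usually shows up in $_SERVER['PHP_AUTH_DIGEST'], and the parser from the PHP manual's Digest example can be adapted; a hedged sketch (whether PHP_AUTH_DIGEST is populated depends on how PHP and Apache are wired up):

      $digest = isset($_SERVER['PHP_AUTH_DIGEST']) ? $_SERVER['PHP_AUTH_DIGEST'] : null;
      $parts = array();
      if ($digest) {
          // pull username, realm, nonce, uri, response, etc. out of the header value
          preg_match_all('@(\w+)=(?:"([^"]+)"|([^\s,]+))@', $digest, $matches, PREG_SET_ORDER);
          foreach ($matches as $m) {
              $parts[$m[1]] = (isset($m[3]) && $m[3] !== '') ? $m[3] : $m[2];
          }
      }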

    Read the article

  • Fetch Max from a date column grouped by a particular field

    - by vamyip
    Hi, I have a table similar to this:
    LogId  RefId  Entered
    ==========================
    1      1      2010-12-01
    2      1      2010-12-04
    3      2      2010-12-01
    4      2      2010-12-06
    5      3      2010-12-01
    6      1      2010-12-10
    7      3      2010-12-05
    8      4      2010-12-01
    Here, LogId is unique; for each RefId there are multiple entries with a timestamp. What I want to extract is the LogId for each latest RefId. I tried the solutions from this link: http://stackoverflow.com/questions/121387/sql-fetch-the-row-which-has-the-max-value-for-a-column, but it returns multiple rows with the same RefId. Can someone help me with this? Thanks Vamyip
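    The pattern from the linked question still applies; the duplicate rows usually come from ties on Entered, so the tie has to be broken as well (here, by taking the highest LogId among the tied rows). A sketch, assuming the table is called ref_log and using the column names above:

      SELECT t.RefId, MAX(t.LogId) AS LogId
      FROM ref_log t
      JOIN (
          SELECT RefId, MAX(Entered) AS MaxEntered
          FROM ref_log
          GROUP BY RefId
      ) latest ON latest.RefId = t.RefId
              AND latest.MaxEntered = t.Entered
      GROUP BY t.RefId;

    This yields exactly one LogId per RefId: the one with the latest Entered date, with the larger LogId winning when two rows share that date.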

    Read the article

  • Firefox - Stashing Requests for Deliberate Resubmission to Django App

    - by Koobz
    I've got an object creation form that's somewhat complicated; it contains a few dynamic formsets etc. I'm trying to ensure that these dynamic formsets are intact if the form runs into an error and returns you to the given page. In cases like this, the refresh button actually works well in re-submitting the request, but I can't rely on it. I'm doing some ad-hoc testing in the browser that I'd like to make a bit more repeatable, and eventually move to a unit test using Django's mock client. Is there an extension, or some convenient method, to stash requests for later re-submission? The goal: I resubmit the request, tweak the code, eyeball the results, rinse and repeat. Three days later I can come back to it and try it again to make sure it's still working. The closest thing I can think of in this case is simply recording my activity with Selenium IDE and replaying it.
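    In the meantime, a stashed request can often be reduced to its POST dictionary and replayed through Django's test client, which makes the tweak-and-eyeball loop repeatable without the browser. A minimal sketch (the URL and field names, including the formset management fields, are placeholders for whatever the real form posts):

      from django.test import Client

      # POST data captured from one real submission of the form
      stashed_post = {
          "name": "example",
          "items-TOTAL_FORMS": "2",
          "items-INITIAL_FORMS": "0",
          "items-0-title": "first",
          "items-1-title": "second",
      }

      client = Client()
      response = client.post("/objects/create/", stashed_post)
      print(response.status_code)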

    Read the article

  • Django - Empty session data in ajax requests

    - by ninja123
    Hi guys, I have an ajax view where I want to set a session variable, like such:
    def upload(request, *args, **kwargs):
        request.session['test'] = 'test'
        request.session.modified = True
        print request.session.items()
    I have another, normal view, something like this:
    def advertise(request):
        print request.session.items()
    I get these two lines printed to the shell:
    [('test', 'test')]
    [('_auth_user_backend', 'django.contrib.auth.backends.ModelBackend'), ('_auth_user_id', 26L)]
    Why is the session data that I set in the ajax view not making it through to my regular views? If I set session data in a regular view, everything works fine, but it seems that ajax requests end up with empty session data. Has anybody dealt with something like this before? Any suggestions are greatly appreciated. Thanks.

    Read the article

  • Forward SNMP requests from AgentX Master to AgentX Subagent

    - by Nadia
    I am running an AgentX master and an AgentX subagent on Linux. When I run snmpget on a default MIB object, e.g. sysdescr.0, it returns fine, but when I request a MIB object that was registered through the AgentX subagent it times out. It appears that the master receives the GET request but does not forward it on to the AgentX subagent. The MIB is registered successfully, but when the master AgentX receives the GET request it says "Sending 60 bytes to UDP: unknown". It can't find the location to forward to. Am I missing a configuration of some sort on the subagent side? How does the master know who is supposed to receive the requests?
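    With Net-SNMP, the master and the subagent have to agree on the AgentX transport; if the subagent registered over a different socket than the one the master is listening on, the master has nowhere to route the OID and the request times out. A hedged sketch of the relevant lines, assuming Net-SNMP is in use; the port is arbitrary:

      # snmpd.conf on the master
      master        agentx
      agentXSocket  tcp:localhost:705

    The subagent then has to be pointed at the same transport, for example by calling netsnmp_ds_set_string(NETSNMP_DS_APPLICATION_ID, NETSNMP_DS_AGENT_X_SOCKET, "tcp:localhost:705") before init_agent(), or via an agentXSocket line in its own configuration file.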

    Read the article

  • Handling multiple async HTTP requests in Silverlight serially

    - by Jeb
    Due to the async nature of http access via WebClient or HttpWebRequest in Silverlight 4, when I want to do multiple http get/posts serially, I find myself writing code that looks like this: doFirstGet(someParams, () => { doSecondGet(someParams, () => { doThirdGet(... } }); Or something similar. I'll end up nesting subsequent calls within callbacks usually implemented using lambdas of some sort. Even if I break things out into Actions or separate methods, it still ends up being hard to read. Does anyone have a clean solution to executing multiple http requests in SL 4 serially? I don't need things to be synchronous, I just want serial execution.
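    Short of pulling in a full coroutine or Rx library, one lightweight pattern is to queue the steps as delegates and let each completion callback start the next one, which flattens the nesting without blocking. A rough sketch of the chaining idea only; DoGet and the *Params variables are placeholders for whatever WebClient/HttpWebRequest wrapper is already in use:

      // Each step gets a "done" callback to invoke when its async call completes.
      void RunSerially(Queue<Action<Action>> steps)
      {
          if (steps.Count == 0) return;
          Action<Action> step = steps.Dequeue();
          step(() => RunSerially(steps));
      }

      // Usage: enqueue one lambda per HTTP call, then kick it off.
      var steps = new Queue<Action<Action>>();
      steps.Enqueue(done => DoGet(firstParams, result => done()));
      steps.Enqueue(done => DoGet(secondParams, result => done()));
      steps.Enqueue(done => DoGet(thirdParams, result => done()));
      RunSerially(steps);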

    Read the article

  • select mysql data using MAX

    - by JPro
    I have test data like this:
    DROP TABLE SELECT_PASS;
    CREATE TABLE SELECT_PASS(ID INT(20), TESTCASE VARCHAR(20), RESULT VARCHAR(20));
    INSERT INTO SELECT_PASS VALUES(1,"TC1","PASS");
    INSERT INTO SELECT_PASS VALUES(2,"TC2","PASS");
    INSERT INTO SELECT_PASS VALUES(3,"TC3","INCONC");
    INSERT INTO SELECT_PASS VALUES(4,"TC1","FAIL");
    INSERT INTO SELECT_PASS VALUES(5,"TC21","FAIL");
    INSERT INTO SELECT_PASS VALUES(6,"TC4","PASS");
    INSERT INTO SELECT_PASS VALUES(7,"TC3","PASS");
    INSERT INTO SELECT_PASS VALUES(8,"TC2","PASS");
    INSERT INTO SELECT_PASS VALUES(9,"TC1","TIMEOUT");
    SELECT TESTCASE, MAX(RESULT) FROM SELECT_PASS GROUP BY TESTCASE;
    The result set I get is:
    TC1   TIMEOUT
    TC2   PASS
    TC21  FAIL
    TC3   PASS
    TC4   PASS
    Basically I want to see those testcases which never passed. Any way to do it? Thanks.
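    MAX(RESULT) only picks the alphabetically largest string per testcase, which is why FAIL and TIMEOUT show up somewhat arbitrarily. One way to list the testcases that never passed is a conditional count in the HAVING clause; a sketch against the table above (MySQL treats the comparison as 0/1):

      SELECT TESTCASE
      FROM SELECT_PASS
      GROUP BY TESTCASE
      HAVING SUM(RESULT = 'PASS') = 0;

    For the sample data this returns only TC21, the one testcase with no PASS row.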

    Read the article

  • Sequencing ajax requests

    - by Scott Evernden
    I find I sometimes need to iterate some collection and make an ajax call for each element. I want each call to return before moving to the next element so that I don't blast the server with requests - which often leads to other issues. And I don't want to set async to false and freeze the browser. Usually this involves setting up some kind of iterator context that i step thru upon each success callback. I think there must be a cleaner simpler way? Does anyone have a clever design pattern for how to neatly work thru a collection making ajax calls for each item?
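    One common shape for this is a small recursive helper: fire the request for the current item and only advance inside the success callback, so the calls run strictly one after another without blocking the browser. A sketch (jQuery; the endpoint and per-item handling are placeholders):

      function processSerially(items, done) {
          var i = 0;
          function next() {
              if (i >= items.length) {
                  if (done) done();
                  return;
              }
              var item = items[i++];
              $.ajax({
                  url: '/process',            // placeholder endpoint
                  data: { id: item.id },
                  success: function (resp) {
                      // handle resp for this item, then move on
                      next();
                  },
                  error: next                 // or stop/retry, depending on requirements
              });
          }
          next();
      }

      processSerially(myCollection, function () { /* all requests finished */ });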

    Read the article

  • GAE app.yaml appears to be inconsistently routing requests

    - by kamens
    I have the following in app.yaml:
    - url: /gae_mini_profiler/static
      static_dir: gae_mini_profiler/static
    - url: /gae_mini_profiler/.*
      script: gae_mini_profiler/main.py
    - url: .*
      script: main.py
    and the following in gae_mini_profiler/main.py:
    def main():
        logging.critical("gae_mini_profiler request!")
        run_wsgi_app(application)
    However, when I fire requests to, say, /gae_mini_profiler/request?request=ABC and repeatedly reload the page, sometimes I get the proper response (as well as a "gae_mini_profiler request!" log entry), and sometimes I get a blank response and nothing in the App Engine logs other than a 200 with an empty response body. This is completely reproducible, happens only in the production environment, and I'd say ~50% of the refreshes work while 50% do not. Any ideas?

    Read the article

  • HTTP server that handles requests through IO devices?

    - by StackedCrooked
    This question can probably only be answered for Unix-like systems that follow the "everything is a file" idiom. Would it be hard to create a web server that mounts local devices for handling HTTP traffic? It would enable a program to read raw HTTP requests from /dev/httpin (for example) and write the responses to /dev/httpout. I think this would be nice because it would allow me to create a web server in any programming language that is capable of handling IO streams. I don't really know where to start on this. Any suggestions on how to set up such a system?

    Read the article

  • Prevent Apache from answering invalid requests

    - by nickdnk
    I have an Apache web server that acts as a web front-end for iPhone and iPad applications that communicate by POST and JSON only. Is there any way to prevent Apache from answering requests that are invalid? I can see my error log is filled with attempts to open files such as /admin.php, /index.php, etc. - files that don't exist on my server. I believe this is possible with IIS, but can you do the same thing with Apache? Basically I want the request to appear timed out unless you post exactly the right content to the right file - or at least whenever a non-existent file is requested. This would make the server appear non-existent to everyone but my iPhone users, as all communication is SSL and the directories are not really guessable. I did disable ServerTokens and all that, but I still get "File not found" etc. when I access the server requesting a random file, which is what these bots do constantly.
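    Apache on its own can't leave the connection hanging to fake a timeout, but it can be told to refuse everything except the one known endpoint, which at least denies the bots a useful response. A hedged sketch of a vhost fragment, assuming mod_rewrite is available and the real handler lives at /api:

      # Refuse anything that is not a POST to the one real endpoint
      RewriteEngine On
      RewriteCond %{REQUEST_METHOD} !=POST [OR]
      RewriteCond %{REQUEST_URI} !^/api$
      RewriteRule .* - [F]

    That still answers with a 403 rather than silence; making the server appear truly dead would need something in front of Apache, such as a firewall rule or a proxy that drops unwanted connections.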

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • Postfix log.... spam attempt?

    - by luri
    I have some weird entries in my mail.log. What I'd like to ask is if postfix is avoiding correctly (according with the main.cf attached below) what seems to be relay attempts, presumably for spamming, or if I can enhance it's security somehow. Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: connect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143] Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: warning: non-SMTP command from catv-80-99-46-143.catv.broadband.hu[80.99.46.143]: GET / HTTP/1.1 Feb 2 11:53:25 MYSERVER postfix/smtpd[9094]: disconnect from catv-80-99-46-143.catv.broadband.hu[80.99.46.143] Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection rate 1/60s for (smtp:80.99.46.143) at Feb 2 11:53:25 Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max connection count 1 for (smtp:80.99.46.143) at Feb 2 11:53:25 Feb 2 11:56:45 MYSERVER postfix/anvil[9097]: statistics: max cache size 1 at Feb 2 11:53:25 Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: connect from vs148181.vserver.de[62.75.148.181] Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: warning: non-SMTP command from vs148181.vserver.de[62.75.148.181]: GET / HTTP/1.1 Feb 2 12:09:19 MYSERVER postfix/smtpd[9302]: disconnect from vs148181.vserver.de[62.75.148.181] Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection rate 1/60s for (smtp:62.75.148.181) at Feb 2 12:09:19 Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max connection count 1 for (smtp:62.75.148.181) at Feb 2 12:09:19 Feb 2 12:12:39 MYSERVER postfix/anvil[9304]: statistics: max cache size 1 at Feb 2 12:09:19 Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: connect from unknown[202.46.129.123] Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: warning: non-SMTP command from unknown[202.46.129.123]: GET / HTTP/1.1 Feb 2 14:17:02 MYSERVER postfix/smtpd[10847]: disconnect from unknown[202.46.129.123] Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection rate 1/60s for (smtp:202.46.129.123) at Feb 2 14:17:02 Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max connection count 1 for (smtp:202.46.129.123) at Feb 2 14:17:02 Feb 2 14:20:22 MYSERVER postfix/anvil[10853]: statistics: max cache size 1 at Feb 2 14:17:02 Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: warning: 95.110.224.230: hostname host230-224-110-95.serverdedicati.aruba.it verification failed: Name or service not known Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: connect from unknown[95.110.224.230] Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: lost connection after CONNECT from unknown[95.110.224.230] Feb 2 20:57:33 MYSERVER postfix/smtpd[18452]: disconnect from unknown[95.110.224.230] Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection rate 1/60s for (smtp:95.110.224.230) at Feb 2 20:57:33 Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max connection count 1 for (smtp:95.110.224.230) at Feb 2 20:57:33 Feb 2 21:00:53 MYSERVER postfix/anvil[18455]: statistics: max cache size 1 at Feb 2 20:57:33 Feb 2 21:13:44 MYSERVER pop3d: Connection, ip=[::ffff:219.94.190.222] Feb 2 21:13:44 MYSERVER pop3d: LOGIN FAILED, user=admin, ip=[::ffff:219.94.190.222] Feb 2 21:13:50 MYSERVER pop3d: LOGIN FAILED, user=test, ip=[::ffff:219.94.190.222] Feb 2 21:13:56 MYSERVER pop3d: LOGIN FAILED, user=danny, ip=[::ffff:219.94.190.222] Feb 2 21:14:01 MYSERVER pop3d: LOGIN FAILED, user=sharon, ip=[::ffff:219.94.190.222] Feb 2 21:14:07 MYSERVER pop3d: LOGIN FAILED, user=aron, ip=[::ffff:219.94.190.222] Feb 2 21:14:12 MYSERVER pop3d: LOGIN 
FAILED, user=alex, ip=[::ffff:219.94.190.222] Feb 2 21:14:18 MYSERVER pop3d: LOGIN FAILED, user=brett, ip=[::ffff:219.94.190.222] Feb 2 21:14:24 MYSERVER pop3d: LOGIN FAILED, user=mike, ip=[::ffff:219.94.190.222] Feb 2 21:14:29 MYSERVER pop3d: LOGIN FAILED, user=alan, ip=[::ffff:219.94.190.222] Feb 2 21:14:35 MYSERVER pop3d: LOGIN FAILED, user=info, ip=[::ffff:219.94.190.222] Feb 2 21:14:41 MYSERVER pop3d: LOGIN FAILED, user=shop, ip=[::ffff:219.94.190.222] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: warning: 71.6.142.196: hostname db4142196.aspadmin.net verification failed: Name or service not known Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: connect from unknown[71.6.142.196] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: lost connection after CONNECT from unknown[71.6.142.196] Feb 3 06:49:29 MYSERVER postfix/smtpd[25834]: disconnect from unknown[71.6.142.196] Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection rate 1/60s for (smtp:71.6.142.196) at Feb 3 06:49:29 Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max connection count 1 for (smtp:71.6.142.196) at Feb 3 06:49:29 Feb 3 06:52:49 MYSERVER postfix/anvil[25837]: statistics: max cache size 1 at Feb 3 06:49:29 I have Postfix 2.7.1-1 running on Ubuntu 10.10. This is my (modified por privacy) main.cf: smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no readme_directory = no smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_key_file = /etc/ssl/private/smtpd.key myhostname = mymailserver.org alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mymailserver.org, MYSERVER, localhost relayhost = mynetworks = 127.0.0.0/8, 192.168.1.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all home_mailbox = Maildir/ smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination mailbox_command = smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_tls_security_level = may smtpd_tls_auth_only = no smtp_tls_note_starttls_offer = yes smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_session_cache_timeout = 3600s tls_random_source = dev:/dev/urandom smtp_tls_security_level = may

    Read the article

  • Redirect some URL requests to CloudFront and the rest direct to the normal server?

    - by indiehacker
    Say I have two types of URL requests that must be handled by my REST API:
    http://query.restapi.com/image.png?apikey=abc123
    http://query.restapi.com/2.0/<apiKey>/resource.json?from=umi.us_census00.state_geometry
    Is it possible to redirect only the URL requests for static images (i.e., regex: *.png?.*) to take advantage of CloudFront's caching, and have the rest of the requests go directly to the normal EC2 server (or at least take a speedier indirect route to the normal EC2 server)? Perhaps the added request time for the misses to CloudFront is too small to worry about? Or perhaps my situation is not a good fit for CloudFront? I understand I will need to make a DNS change so that current URL requests like http://query.restapi.com/some.png?apikey=0123 get redirected to http://d1234.cloudfront.net/some.png, but I am hoping there is some way to redirect just the static .png requests to take advantage of CloudFront?

    Read the article

  • Ajax call from a form rendered as Ajax response (jQuery + Grails: chaining ajax requests)

    - by bsreekanth
    Hello, I was expecting the below scenario common, but couldn't find much help online. I have a form loaded through Ajax (say, create entity form). It is loaded through a button click (load) event $("#bt-create").click(function(){ $ ('#pid').load('/controller/vehicleModel/create3'); return false; }); the response (a form) is written in to the pid element. The name and id of the form is ajax-form, and the submit event is attached to an ajax post request $(function() { $("#ajax-form").submit(function(){ // do something... var url = "/app/controller/save" $.post(url, $(this).serialize(), function(data) { alert( data ) ; /// alert data from server }); I could make the above ajax operations individually. That is the ajax post operation succeeds if it calls from a static html file. But if I chain the requests (after completing the first), so that it calls from the output form generated by the first request, nothing happens. I could see the post method is called through firebug. Is there a better way to handle above flow? One more interesting thing I noticed. As you could see, I use grails as my platform. If I keep the javascripts in the main.gsp (master layout), the submit event would not register as the breakpoint is not hit in firebug. But, if I define the javascript in the template file (which renders the form above), the breakpoint is hit, but as I explained, the action is not called at the controller. I changes the javascript to the head section but same result. any help greatly appreciated. thanks, Babu.
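    Since the second request's form only exists after the first response has been written into #pid, a submit handler bound at domready to #ajax-form never attaches to it; one way around that is to bind (or re-bind) the handler in the load() callback so it attaches to whatever form is currently inside #pid. A rough sketch of the idea, reusing the selectors from the question:

      $("#bt-create").click(function () {
          $('#pid').load('/controller/vehicleModel/create3', function () {
              // runs after the form markup has been injected, so #ajax-form exists now
              $("#ajax-form").submit(function () {
                  var url = $(this).attr('action');   // the freshly rendered form's action
                  $.post(url, $(this).serialize(), function (data) {
                      alert(data);
                  });
                  return false;                       // suppress the normal (non-ajax) submit
              });
          });
          return false;
      });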

    Read the article

  • how to do asynchronous http requests with epoll and python 3.1

    - by flow
    there is an interesting page http://scotdoyle.com/python-epoll-howto.html about how to do asnchronous / non-blocking / AIO http serving in python 3. there is the tornado web server which does include a non-blocking http client. i have managed to port parts of the server to python 3.1, but the implementation of the client requires pyCurl and seems to have problems (with one participant stating how ‘Libcurl is such a pain in the neck’, and looking at the incredibly ugly pyCurl page i doubt pyCurl will arrive in py3+ any time soon). now that epoll is available in the standard library, it should be possible to do asynchronous http requests out of the box with python. i really do not want to use asyncore or whatnot; epoll has a reputation for being the ideal tool for the task, and it is part of the python distribution, so using anything but epoll for non-blocking http is highly counterintuitive (prove me wrong if you feel like it). oh, and i feel threading is horrible. no threading. i use stackless. people further interested in the topic of asynchronous http should not miss out on this talk by peter portante at PyCon2010; also of interest is the keynote, where speaker antonio rodriguez at one point emphasizes the importance of having up-to-date web technology libraries right in the standard library.
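    For the client side at least, the building blocks are already in the standard library: a non-blocking socket plus select.epoll gives a bare-bones asynchronous GET without pycurl. A stripped-down sketch (Python 3, Linux only, no redirects or chunked decoding, one request at a time; the host is just an example):

      import socket, select

      def async_get(host, path='/', port=80):
          sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          sock.setblocking(False)
          sock.connect_ex((host, port))   # returns EINPROGRESS instead of raising

          request = ('GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host)).encode('ascii')
          response = b''

          ep = select.epoll()
          ep.register(sock.fileno(), select.EPOLLOUT)
          try:
              while True:
                  events = ep.poll(5)
                  if not events:
                      break                           # timed out waiting on the socket
                  for fd, event in events:
                      if event & select.EPOLLIN:
                          chunk = sock.recv(4096)
                          if chunk:
                              response += chunk
                          else:
                              return response         # peer closed: response is complete
                      elif event & (select.EPOLLHUP | select.EPOLLERR):
                          return response             # connection failed or was dropped
                      elif event & select.EPOLLOUT:
                          sock.send(request)
                          ep.modify(fd, select.EPOLLIN)
          finally:
              ep.unregister(sock.fileno())
              ep.close()
              sock.close()
          return response

      print(async_get('example.com'))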

    Read the article

  • jQuery 1.4 Dynamically created images aspect ratio incorrect in IE8 and max-width

    - by Chris
    After upgrading to jQuery 1.4, when I try to add an image to a page dynamically using jQuery 1.4 in IE8, the image will lose the correct aspect ratio. This appears to affect only IE8 (IE7 and Firefox work fine) with jQuery 1.4 (1.3.2 works fine). Including the jQuery compatability file does not fix the problem. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" language="javascript" type="text/javascript"></script> <!-- Switching to 1.3.2 fixes the problem --> <!--<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" language="javascript" type="text/javascript"></script>--> <script type="text/javascript"> $(document).ready(function() { var dynImg = $('<img></img>').attr('src', 'http://www.google.com/intl/en_ALL/images/logo.gif'); $('body').append(dynImg); }); </script> <style type="text/css"> img { max-width: 5em; } </style> </head> <body></body></html>

    Read the article

  • Delphi: Why does IdHTTP.ConnectTimeout make requests slower?

    - by K.Sandell
    I discovered that when setting the ConnectTimeout property for a TIdHTTP component, it makes the requests (GET and POST) about 120 ms slower. Why is this, and can I avoid or bypass it somehow? Env: D2010 with shipped Indy components, all updates installed for D2010. OS is WinXP (32bit) SP3 with most patches... My timing routine is:
    Procedure DoGet;
    Var
      Freq, T1, T2 : Int64;
      Cli : TIdHTTP;
      S   : String;
    begin
      QueryPerformanceFrequency(Freq);
      Try
        QueryPerformanceCounter(T1);
        Cli := TIdHTTP.Create( NIL );
        Cli.ConnectTimeout := 1000; // without this we get < 15ms!!
        S := Cli.Get('http://127.0.0.1/empty_page.php');
      Finally
        FreeAndNil(Cli);
        QueryPerformanceCounter(T2);
      End;
      Memo1.Lines.Add('Time = ' + FormatFloat('0.000', (T2-T1)/Freq));
    End;
    With the ConnectTimeout set in code I get avg. times of 130-140 ms; without it, it's about 5-15 ms...

    Read the article

  • jQuery ajax form submit - multiple post/get requests caused when validation fails

    - by kenny99
    Hi, I have a problem where if a form submission (set up to submit via AJAX) fails validation, the next time the form is submitted, it doubles the number of post requests - which is definitely not what I want to happen. I'm using the jQuery ValidationEngine plugin to submit forms and bind validation messages to my fields. This is my code below. I think my problem may lie in the fact that I'm retrieving the form action attribute via $('.form_container form').attr('action') - I had to do it this way as using $(this).attr was picking up the very first loaded form's action attr - however, when i was loading new forms into the page it wouldn't pick up the new form's action correctly, until i removed the $(this) selector and referenced it using the class. Can anyone see what I might be doing wrong here? (Note I have 2 similar form handlers which are loaded on domready, that could also be an issue) $('input#eligibility').live("click", function(){ $(".form_container form").validationEngine({ ajaxSubmit: true, ajaxSubmitFile: $('.form_container form').attr('action'), success : function() { var url = $('input.next').attr('rel'); ajaxFormStage(url); }, failure : function() { //unique stuff for this form } }); }); //generic form handler - all form submissions not flagged with the #eligibility id $('input.next:not(#eligibility)').live("click", function(){ $(".form_container form").validationEngine({ ajaxSubmit: true, ajaxSubmitFile: $('.form_container form').attr('action'), success : function() { var url = $('input.next').attr('rel'); ajaxFormStage(url); }, failure : function() { } }); });
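    One likely culprit is that validationEngine is being initialized inside the click handler, so every additional click (for instance after a failed validation) attaches another submit/ajax handler to the same form and the posts multiply. A hedged sketch of the alternative: initialize once each time a form is (re)loaded into .form_container, reading the action from the form actually being bound, and keep the click handlers free of binding logic:

      function bindValidation($form) {
          $form.validationEngine({
              ajaxSubmit: true,
              ajaxSubmitFile: $form.attr('action'),   // this particular form's action
              success: function () {
                  var url = $('input.next').attr('rel');
                  ajaxFormStage(url);
              },
              failure: function () { }
          });
      }

      // call this once after each new form is injected into .form_container,
      // instead of re-initializing inside the input.next click handlers
      bindValidation($('.form_container form'));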

    Read the article

  • CSRF Protection in AJAX Requests using MVC2

    - by mnemosyn
    The page I'm building depends heavily on AJAX. Basically, there is just one "page" and every data transfer is handled via AJAX. Since overoptimistic caching on the browser side leads to strange problems (data not reloaded), I have to perform all requests (also reads) using POST - that forces a reload. Now I want to prevent the page against CSRF. With form submission, using Html.AntiForgeryToken() works neatly, but in AJAX-request, I guess I will have to append the token manually? Is there anything out-of-the box available? My current attempt looks like this: I'd love to reuse the existing magic. However, HtmlHelper.GetAntiForgeryTokenAndSetCookie is private and I don't want to hack around in MVC. The other option is to write an extension like public static string PlainAntiForgeryToken(this HtmlHelper helper) { // extract the actual field value from the hidden input return helper.AntiForgeryToken().DoSomeHackyStringActions(); } which is somewhat hacky and leaves the bigger problem unsolved: How to verify that token? The default verification implementation is internal and hard-coded against using form fields. I tried to write a slightly modified ValidateAntiForgeryTokenAttribute, but it uses an AntiForgeryDataSerializer which is private and I really didn't want to copy that, too. At this point it seems to be easier to come up with a homegrown solution, but that is really duplicate code. Any suggestions how to do this the smart way? Am I missing something completely obvious?

    Read the article

  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases, the receiver of the stream must invoke Read ? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported? Let me show it this way. The simplest example of Streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this: [ServiceContract] public interface IStreamService { [OperationContract] Stream GetData(string fileName); } public class StreamService : IStreamService { public Stream GetData(string filename) { FileStream fs = new FileStream(filename, FileMode.Open) return fs; } } And the client code is like this: StreamDemo.StreamServiceClient client = new WcfStreamDemoClient.StreamDemo.StreamServiceClient(); Stream str = client.GetData(@"c:\path\to\myfile.dat"); do { b = str.ReadByte(); //read next byte from stream ... } while (b != -1); (example taken from http://blog.joachim.at/?p=33) Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc? The analogy is the Response.OutputStream from an ASP.NET request. In ASPNET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.

    Read the article

  • Java HTTP Requests Buffer size

    - by behrk2
    Hello, I have an HTTP Request Dispatcher class that works most of the time, but I had noticed that it "stalls" when receiving larger requests. After looking into the problem, I thought that perhaps I wasn't allocating enough bytes to the buffer. Before, I was doing: byte[] buffer = new byte[10000]; After changing it to 20000, it seems to have stopped stalling: String contentType = connection.getHeaderField("Content-type"); ByteArrayOutputStream baos = new ByteArrayOutputStream(); InputStream responseData = connection.openInputStream(); byte[] buffer = new byte[20000]; int bytesRead = responseData.read(buffer); while (bytesRead > 0) { baos.write(buffer, 0, bytesRead); bytesRead = responseData.read(buffer); } baos.close(); connection.close(); Am I doing this right? Is there anyway that I can dynamically set the number of bytes for the buffer based on the size of the request? Thanks...
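    For what it's worth, the read loop is already correct for any buffer size (read() simply returns fewer bytes per call with a smaller buffer), so a larger buffer mostly reduces the number of passes through the loop. If sizing the buffer from the response does help, the Content-Length header can be used when the server provides it; a small sketch of that idea, falling back to a default when the header is missing or unparsable:

      int bufferSize = 8192; // default when Content-Length is unknown
      String lengthHeader = connection.getHeaderField("Content-Length");
      if (lengthHeader != null) {
          try {
              int contentLength = Integer.parseInt(lengthHeader.trim());
              if (contentLength > 0) {
                  // cap the allocation so one huge response cannot exhaust memory
                  bufferSize = Math.min(contentLength, 64 * 1024);
              }
          } catch (NumberFormatException e) {
              // keep the default
          }
      }
      byte[] buffer = new byte[bufferSize];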

    Read the article

  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have a EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed it uses the existing data context if exists, else it creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article) then you have to deal with multiple contexts and the complexity around detaching and attaching entities from contexts. My solution is to share the context between posts by creating a single View Model that includes all required repositories (initialised to share the same context) plus any associated data and store this model in a Session variable, retrieving from Session on subsequent page requests. Therefore maintaining the same context across all posts until the profile is saved. This works fine BUT I am concerned that I don't actually know exactly what is stored in the model session variable or more importantly the size of the Session variable. So two questions I suppose: firstly should I look for a better solution to handle the shared context across posts issue (any suggestions welcome)? And secondly what is actually stored in the Session when it includes a repository plus context? Any help appreciated!

    Read the article

  • Ordering by a max or a min from another table

    - by Paul Tomblin
    I have a table that consists of a unique id, and a few other attributes. It holds "schedules". Then I have another table that holds a list of all the times each schedule has or will "fire". This isn't the exact schema, but it's close: create table schedule ( id varchar(40) primary key, attr1 int, attr2 varchar(20) ); create table schedule_times ( id varchar(40) foreign key schedule(id), fire_date date ); I want to query the schedule table, getting the attributes and the next and previous fire_dates, in Java, sometimes ordering on one of the attributes, but sometimes ordering on either previous fire_date or the next fire_date. Ordering by the attributes is easy, I just stick an "order by" into the string while I'm building my prepared statement. I'm not even sure how to go about selecting the last fire_date and the next one in a single query - I know that I can find the next fire_date for a given id by doing a SELECT min(fire_date) FROM schedule_times WHERE id = ? AND fire_date > sysdate; and the similar thing for previous fire_date using max() and fire_date < sysdate. I'm just drawing a blank on how to incorporate that into a single select from the schedule so I can get both next and previous fire_date in one shot, and also how to order by either of those attributes.
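    Both dates can be pulled in as correlated subqueries (or an outer join onto a grouped subquery), after which the outer query can order by either of them or by the attributes. A sketch using correlated subqueries, assuming the schema above and Oracle-style sysdate as in the question:

      SELECT s.id,
             s.attr1,
             s.attr2,
             (SELECT MAX(t.fire_date) FROM schedule_times t
               WHERE t.id = s.id AND t.fire_date < sysdate) AS prev_fire,
             (SELECT MIN(t.fire_date) FROM schedule_times t
               WHERE t.id = s.id AND t.fire_date > sysdate) AS next_fire
      FROM schedule s
      ORDER BY next_fire;   -- or prev_fire, attr1, attr2 - whichever the code builds in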

    Read the article
