Search Results

Search found 95301 results on 3813 pages for 'server client'.

  • Mono WCF NetTcp service takes only one client at a time

    - by vene
    While trying to build a client-server WCF application in Mono we ran into some issues. Reducing it to a bare example, we found that the service only accepts one client at a time. If another client attempts to connect, it hangs until the first one disconnects. Simply changing to BasicHttpBinding fixes it, but we need NetTcpBinding for duplex communication. The problem also does not appear if compiled under MS .NET.

    EDIT: I doubt (and hope not) that Mono doesn't support what I'm trying to do; as far as I have noticed, Mono code usually throws NotImplementedException in such cases. I am using Mono v2.6.4.

    This is how the service is opened in our basic scenario:

        public static void Main (string[] args)
        {
            var binding = new NetTcpBinding ();
            binding.Security.Mode = SecurityMode.None;
            var address = new Uri ("net.tcp://localhost:8080");
            var host = new ServiceHost (typeof(Hello));
            host.AddServiceEndpoint (typeof(IHello), binding, address);

            ServiceThrottlingBehavior behavior = new ServiceThrottlingBehavior ()
            {
                MaxConcurrentCalls = 100,
                MaxConcurrentSessions = 100,
                MaxConcurrentInstances = 100
            };
            host.Description.Behaviors.Add (behavior);

            host.Open ();
            Console.ReadLine ();
            host.Close ();
        }

    The client channel is obtained like this:

        var binding = new NetTcpBinding ();
        binding.Security.Mode = SecurityMode.None;
        var address = new EndpointAddress ("net.tcp://localhost:8080/");
        var client = new ChannelFactory<IHello> (binding, address).CreateChannel ();

    As far as I know this is a simplex connection, isn't it? The contract is simply:

        [ServiceContract]
        public interface IHello
        {
            [OperationContract]
            string Greet (string name);
        }

    The service implementation has no ServiceModel tags or attributes. I'll update with details as required.

  • flex/actionscript client entity state refresh on JPA update using Pimento EntityManager

    - by Chris
    My Flex application uses a client-side Pimento EntityManager which fetches quite a few objects and associations. It does this by forcing eager fetching of particular association ends in the form of fetch plans. I would like to update the client whenever a change has been made to an entity in the EntityManager's cache. Is it possible to update the state of the changed entity ONLY, including updating which entities are associated, without resetting the state of those associated entities?

    I have set up an EntityListener with a JPA post-update method that notifies clients when a persisted entity has been updated. I want this to trigger a refresh for the modified client-side entity, but calling EntityManager.refresh(entity) resets all lazy associations to proxies. Initializing these proxies resets the associated entities, even if they were loaded previously. I'm looking for an efficient way to keep the client's state in sync with the server's state, at least with respect to the entities already retrieved by the initial load.

  • Communication between Java server and Matlab client

    - by user272587
    I'd like to establish a server (Java) / client (Matlab) communication using sockets, so that they can send messages to each other. An example shows how to do this with a Java server and a Java client: http://java.sun.com/docs/books/tutorial/networking/sockets/clientServer.html. When I try to rewrite the client part in Matlab, I can only get the first message that the Java server sends and display it in the Matlab command window. When I type a message in the Matlab command window, I can't pass it to the Java server.

    Java code:

        kkSocket = new Socket("localhost", 3434);

    Matlab equivalent:

        kkSocket = Socket('localhost', 3434);

    Java code for the client:

        out = new PrintWriter(kkSocket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(kkSocket.getInputStream()));

    What would be the Matlab equivalent for this? Thanks in advance.

  • Case Order by using Null

    - by molgan
    Hello, I have the following test code:

        CREATE TABLE #Foo (Foo int)

        INSERT INTO #Foo SELECT 4
        INSERT INTO #Foo SELECT NULL
        INSERT INTO #Foo SELECT 2
        INSERT INTO #Foo SELECT 5
        INSERT INTO #Foo SELECT 1

        SELECT *
        FROM #Foo
        ORDER BY CASE WHEN Foo IS NULL THEN Foo DESC ELSE Foo END  -- DESC is not valid inside a CASE expression

        DROP TABLE #Foo

    I'm trying to produce the following output, i.e. "if null, then put it last":

        1
        2
        4
        5
        NULL

    How is that done in SQL Server 2005? /M
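
    A minimal sketch of the usual technique, assuming SQL Server 2005: sort on an is-null flag first and on the value second, which avoids putting DESC inside the CASE expression entirely.

        SELECT Foo
        FROM #Foo
        ORDER BY
            CASE WHEN Foo IS NULL THEN 1 ELSE 0 END,  -- NULLs get flag 1, so they sort last
            Foo;                                      -- non-NULL values sort ascending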

  • CONTAINSTABLE with wildcard works differently in SQL Server 2005 and SQL Server 2008?

    - by musuk
    I have two identical databases, one on SQL Server 2005 and one on SQL Server 2008. Both have the same SQL_Latin1_General_CP1_CI_AS collation, and the full-text search catalogs have the same settings. The two databases contain a table with the same data, an NTEXT string: "...kræve en forklaring fra miljøminister Connie Hedegaard..."

    My problem is: on SQL Server 2008, CONTAINSTABLE finds nothing for this query:

        select * from ContainsTable(SearchIndex_7, Content, '"miljø*"') ct

    but SQL Server 2005 works perfectly and finds the necessary record. SQL Server 2008 does find the record with either of these queries:

        select * from ContainsTable(SearchIndex_7, Content, '"milj*"') ct

        select * from ContainsTable(SearchIndex_7, Content, '"miljøminister"')

    What can be the reason for such strange behavior?
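
    One way to investigate, assuming the cause lies in the reworked word breakers in SQL Server 2008: the sys.dm_fts_parser function (new in 2008) shows how a string is tokenized for a given locale, e.g. Danish (LCID 1030), so you can compare what the 'miljø*' prefix is actually matched against.

        -- Shows the tokens the 2008 word breaker produces; a different split of
        -- 'miljøminister' than in 2005 would explain why the 'miljø*' prefix misses.
        SELECT special_term, display_term
        FROM sys.dm_fts_parser(N'"miljøminister"', 1030, 0, 0);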

  • Exporting query results to a file on the fly

    - by ercan
    Hi all, I need to export the results of a query to a CSV file in an FTP folder.

    1. Is it possible to achieve this within a stored procedure?
    2. If yes, here comes another constraint: can I achieve it without sysadmin privileges, i.e. without using xp_cmdshell + the BCP utility?
    3. If no to 2., does the caller have to have sysadmin privileges, or would it suffice if the SP owner has sysadmin privileges?

    Here are some more details on the problem: the SP must export and transfer the file on the fly and raise an error if something goes wrong. The caller must get a response immediately; i.e., in case of no error, he can assume that the results were successfully transferred to the folder. Therefore, a DTS/SSIS job that runs every N minutes is not an option. I know the problem smells like I will have to do this at the application level, but I would be more than happy if all that stuff could be done from T-SQL.
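
    For reference, a minimal sketch of the xp_cmdshell + BCP route the question hopes to avoid, assuming sysadmin rights or a configured proxy account; server name, share path, and table names are placeholders:

        -- One-time setup by a sysadmin: non-sysadmin callers then run xp_cmdshell
        -- under the proxy account's Windows credentials.
        -- EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\FtpWriter', 'password';
        -- GRANT EXECUTE ON xp_cmdshell TO ExportRole;

        DECLARE @cmd varchar(4000)
        SET @cmd = 'bcp "SELECT col1, col2 FROM MyDb.dbo.MyTable" queryout '
                 + '"\\ftpserver\share\results.csv" -c -t, -T -S MYSERVER'
        EXEC master..xp_cmdshell @cmd
        -- Inspect the returned output rows and RAISERROR if the export failed.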

  • Bug: files uploaded via desktop or web client have hidden tag when listed via API

    - by Jon Webb
    Files uploaded to Google Drive sometimes incorrectly have a hidden tag when listed via the Document List v3 REST API:

        <category scheme='http://schemas.google.com/g/2005/labels' term='http://schemas.google.com/g/2005/labels#hidden' label='hidden'/>

    This happens if:

    - a subfolder is created via the Google Drive desktop client and files are copied in, or
    - a folder is uploaded via the Google Drive web client. The folder does not have the hidden tag, but the files that were uploaded do.

    The files do not have this tag if:

    - they are individually uploaded via the Google Drive web client to the subfolder, or
    - they are uploaded via the REST API to the subfolder, or
    - they are uploaded via the desktop client to the My Drive root.

    The files and folders show up in Google Drive whether they have the hidden tag or not. We're using the API with the following scopes:

        https://docs.google.com/feeds/
        https://spreadsheets.google.com/feeds/
        https://docs.googleusercontent.com/

    I have verified and can recreate this with the OAuth 2.0 playground. Google Drive desktop client version 1.3.3209.2600 on Win7 32-bit. I guess these must be bugs in the API...

  • Override Linq-to-Sql Datetime.ToString() Default Convert Values

    - by snmcdonald
    Is it possible to override the default CONVERT style? I would like the default CONVERT function to always return ISO 8601 style 126.

    Steps to reproduce:

        DROP TABLE DATES;
        CREATE TABLE DATES
        (
            ID INT IDENTITY(1,1) PRIMARY KEY,
            MYDATE DATETIME DEFAULT(GETUTCDATE())
        );
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;
        INSERT INTO DATES DEFAULT VALUES;

        SELECT CONVERT(NVARCHAR, MYDATE)            AS CONVERTED,
               CONVERT(NVARCHAR(4000), MYDATE, 126) AS ISO,
               MYDATE
        FROM DATES
        WHERE MYDATE LIKE 'Feb%'

    Output:

        CONVERTED            ISO                      MYDATE
        -------------------  -----------------------  -----------------------
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040
        Feb  8 2011 12:17AM  2011-02-08T00:17:03.040  2011-02-08 00:17:03.040

    Linq-to-Sql calls CONVERT(NVARCHAR, @p) when I cast ToString(). However, I am displaying all my data in the ISO 8601 format, so I would like to override the database default, if possible, to CONVERT(NVARCHAR, @p, 126). I am using dynamic Linq-to-Sql, as demoed by ScottGu, to process my data:

        PropertyInfo piField = typeof(T).GetProperty(rule.field);
        if (piField != null)
        {
            Type typeField = piField.PropertyType;
            if (typeField.IsGenericType && typeField.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
            {
                filter = filter
                    .Select(x => x)
                    .Where(string.Format("{0} != null", rule.field))
                    .Where(string.Format("{0}.Value.ToString().Contains(\"{1}\")", rule.field, rule.data));
            }
            else
            {
                filter = filter
                    .Select(x => x)
                    .Where(string.Format("{0} != null", rule.field))
                    .Where(string.Format("{0}.ToString().Contains(\"{1}\")", rule.field, rule.data));
            }
        }

    I was hoping the property below would convert the expression from CONVERT(NVARCHAR, @p) to CONVERT(NVARCHAR, @p, 126); however, I get a NotSupportedException: "... has no supported translation to SQL."

        public string IsoDate
        {
            get
            {
                if (SUBMIT_DATE.HasValue)
                {
                    return SUBMIT_DATE.Value.ToString("o");
                }
                else
                {
                    return string.Empty;
                }
            }
        }
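
    One server-side workaround, offered as a sketch rather than a known Linq-to-SQL feature: persist the style-126 text as a computed column and map that column in the entity, so the dynamic filter compares against a string the database itself produced. The column name is an assumption.

        -- The ISO 8601 text lives beside the datetime; Linq can filter and
        -- display it without needing a ToString() translation at all.
        ALTER TABLE DATES ADD MYDATE_ISO AS CONVERT(NVARCHAR(33), MYDATE, 126);

        SELECT MYDATE_ISO FROM DATES WHERE MYDATE_ISO LIKE '2011-02%';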

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003), and I have sites on all of them. They all use cookies.

    On the production server, when browsing to our site www.supernovainteractive.com, a cookie records when you visited the site, so the logo animation (top left-hand side) does not replay when you click through to another page. This works in all browsers on the production server.

    I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only. This happens when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). The 2003 staging server also works fine.

    You can test on the Supernova Interactive site by noticing the logo in the top left corner. It uses a cookie to detect whether you've already seen the animation. Once you've seen it once, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009. Notice you do not get a cookie from exchange.supernova.com.

    Any ideas on this one? Firewalls are off on the server.

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we run queries against is simply a large table of concepts our system knows about, which matches roughly the set of Wikipedia page titles. For this service, speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set and performs a simple log(n) lookup on each user keystroke. The tail set is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementation?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation of this used a trie, but the memory bloat with that was actually worse than the sorted set, due to the large number of object references needed. I'll read up on ternary search trees to see if they could be of use.
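
    One of the disk-backed options the question alludes to, sketched in T-SQL under the assumption that the concept table could live in a database: a prefix query on an indexed column compiles to an index range seek, so lookups stay fast without holding everything in the Java heap. Table and column names are illustrative.

        CREATE TABLE Concepts
        (
            Term varchar(200) NOT NULL,
            Type varchar(100) NOT NULL
        );
        CREATE CLUSTERED INDEX IX_Concepts_Term ON Concepts (Term);

        -- A prefix match is a range seek on the clustered index, not a scan.
        DECLARE @prefix varchar(200);
        SET @prefix = 'Micro';
        SELECT TOP (10) Term, Type
        FROM Concepts
        WHERE Term LIKE @prefix + '%'
        ORDER BY Term;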

  • Is there a way to back up a SQL 2005 database structure fully, but only the data in a certain schema?

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema, which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time and is merely there for extremely fast read purposes. Since the data is redundant and VERY large, I'd like to exclude it from the backup. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours, for use in the event of a failure.

    I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never back that data file up, yet still be able to restore its structure (and other DDL stuff like procs, views, etc.) in the event of a failure?

    Somewhat related: can I also have these tables skip transaction logging, if I go to the "Full" recovery model for the rest of the database?
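
    A sketch of the filegroup route, with placeholder names and assuming SQL Server 2005: put the web schema's tables on their own filegroup, then back up only the filegroups you care about. The structure (DDL, procs, views) lives in the primary filegroup's catalog, and the web data is regenerated by your procs after a restore. Note that restoring without the skipped filegroup is a piecemeal restore, so check your edition's support before relying on it.

        ALTER DATABASE MyDb ADD FILEGROUP WebData;
        ALTER DATABASE MyDb ADD FILE
            (NAME = WebData1, FILENAME = 'D:\Data\MyDb_WebData1.ndf')
            TO FILEGROUP WebData;
        -- Rebuild the web schema's tables ON [WebData] so their pages move there.

        -- Back up everything except the WebData filegroup.
        BACKUP DATABASE MyDb
            FILEGROUP = 'PRIMARY'
            TO DISK = 'E:\Backups\MyDb_primary.bak';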

  • InvalidOperationException when executing SqlCommand with transaction

    - by Serhat Özgel
    I have this code, running in parallel in two separate threads. It works fine for a few iterations, but at some random point it throws an InvalidOperationException: "The transaction is either not associated with the current connection or has been completed."

    At the point of the exception, I look inside the transaction with Visual Studio and verify that its connection is set normally. Also, command.Transaction._internalTransaction._transactionState is set to Active, and the IsZombied property is set to false. This is a test application, and I am using Thread.Sleep to create longer transactions and cause overlaps. Why might the exception be thrown, and what can I do about it?

        IDbCommand command = new SqlCommand("Select * From INFO");
        IDbConnection connection = new SqlConnection(connectionString);
        command.Connection = connection;
        IDbTransaction transaction = null;
        try
        {
            connection.Open();
            transaction = connection.BeginTransaction();
            command.Transaction = transaction;
            command.ExecuteNonQuery();   // Sometimes throws the exception
            Thread.Sleep(forawhile);     // For overlapping transactions running in parallel
            transaction.Commit();
        }
        catch (ApplicationException exception)
        {
            if (transaction != null)
            {
                transaction.Rollback();
            }
        }
        finally
        {
            connection.Close();
        }

  • Backing up an online database

    - by Veejay
    I have a 70 MB database for my website, which is hosted with a provider. I am able to access the db remotely using SSMS 2008. With the website running, what is the best way to back up the db locally on my machine? Thanks
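
    A minimal sketch, assuming the provider allows running BACKUP DATABASE to a folder on their server that you can then download (e.g. over FTP); the names and path are placeholders:

        BACKUP DATABASE MyWebDb
            TO DISK = 'D:\Backups\MyWebDb.bak'
            WITH COPY_ONLY, INIT;  -- COPY_ONLY (SQL 2005+) leaves the provider's backup chain undisturbed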

  • .NET 3.5 Client Framework redistributable?

    - by Holli
    It's nice of Microsoft to offer things like the Client Framework for anybody who doesn't need the complete framework to run an application. But for about an hour I have been searching the web for a redistributable version of this package, and I can't find anything. It looks like the Client Framework is only available for ClickOnce deployment or via a bootstrapper that downloads the framework. Neither of these is an option for me.

    On this page http://www.microsoft.com/downloads/details.aspx?familyid=992cffcb-f8ce-41d9-8bd6-31f3e216285c&displaylang=en I found a package that contains both: "The download package contains the .NET Framework Client Profile and the full .NET Framework 3.5 Service Pack 1." But again, this is not what I need, and it's even bigger than the single framework. Is there anything like a .NET 3.5 Client Framework redistributable?

  • How to control the order of assignment for a new identity column in MSSQL?

    - by alpav
    I have a table with a CreateDate datetime field, default(getdate()), that does not have any identity column. I would like to add an identity(1,1) field that reflects the same order for existing records as the CreateDate field (ORDER BY on either would give the same results). How can I do that? I guess that if I create a clustered key on the CreateDate field and then add the identity column it will work (though I'm not sure that's guaranteed); is there a good/better way? I am interested in SQL 2005, but I guess the answer will be the same for SQL 2008 and SQL 2000.
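
    A sketch of the clustered-key approach from the question, assuming SQL Server 2005 and a table named MyTable; the identity-assignment order here is commonly observed but, as the question suspects, not documented as guaranteed:

        -- Cluster on CreateDate so the rows' physical order matches date order.
        CREATE CLUSTERED INDEX IX_MyTable_CreateDate ON MyTable (CreateDate);

        -- Adding the identity column then numbers rows in clustered-index order
        -- (observed behavior; verify afterwards rather than relying on it).
        ALTER TABLE MyTable ADD RowId int IDENTITY(1,1);

        -- Sanity check: both orderings should agree.
        SELECT RowId, CreateDate FROM MyTable ORDER BY CreateDate, RowId;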

  • Service Broker message processing order

    - by Blootac
    Everywhere I read says that messages handled by Service Broker are processed in the order they arrive. Yet if you create a table, message type, contract, service, etc., have an activation stored proc that waits for 2 seconds and inserts the message into a table, set the max queue readers to 5 or 10, and send 20-odd messages, I can see in the table that they are inserted out of order, even though when I send them to the queue and look at its contents the messages are all in the right order. Is it because the WAITFOR delay waits to the nearest second, so each thread wakes with a different subsecond time and then fights for a lock, or something like that? The reason I've got a delay in there is to simulate delays from joins etc. Thanks.

    Demo code:

        -- Create the table and the Service Broker objects
        CREATE TABLE test
        (
            id int identity(1,1),
            contents varchar(100)
        )

        CREATE MESSAGE TYPE test
        CREATE CONTRACT mycontract (test SENT BY INITIATOR)
        GO

        CREATE PROCEDURE dostuff
        AS
        BEGIN
            DECLARE @msg varchar(100);
            RECEIVE TOP (1) @msg = message_body FROM myQueue
            IF @msg IS NOT NULL
            BEGIN
                WAITFOR DELAY '00:00:02'
                INSERT INTO test(contents) VALUES(@msg)
            END
        END
        GO

        CREATE QUEUE myQueue  -- missing from the original listing; needed before ALTER QUEUE

        ALTER QUEUE myQueue
            WITH STATUS = ON,
            ACTIVATION
            (
                STATUS = ON,
                PROCEDURE_NAME = dostuff,
                MAX_QUEUE_READERS = 10,
                EXECUTE AS SELF
            )

        CREATE SERVICE senderService ON QUEUE myQueue (mycontract)
        CREATE SERVICE receiverService ON QUEUE myQueue (mycontract)
        GO

        -- Now send lots of messages to the queue; each message opens its own dialog
        DECLARE @dialog_handle uniqueidentifier

        BEGIN DIALOG @dialog_handle
            FROM SERVICE senderService
            TO SERVICE 'receiverService'
            ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');

        BEGIN DIALOG @dialog_handle
            FROM SERVICE senderService
            TO SERVICE 'receiverService'
            ON CONTRACT mycontract;
        SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>')

        -- ...and likewise for messages 3 through 7, each on a new dialog
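
    A point worth a sketch, offered as my reading of the Service Broker guarantees: ordering is only guaranteed within a single conversation, and the demo opens a new dialog per message, so with 10 parallel queue readers nothing ties the messages together. Sending them all on one conversation handle keeps them in order, because messages of one conversation group are locked and processed by one reader at a time:

        DECLARE @h uniqueidentifier;
        BEGIN DIALOG @h
            FROM SERVICE senderService
            TO SERVICE 'receiverService'
            ON CONTRACT mycontract;

        -- One conversation: Service Broker delivers and processes these in order.
        SEND ON CONVERSATION @h MESSAGE TYPE test ('<test>1</test>');
        SEND ON CONVERSATION @h MESSAGE TYPE test ('<test>2</test>');
        SEND ON CONVERSATION @h MESSAGE TYPE test ('<test>3</test>');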

  • POWER error handling inside a SQL function

    - by user172062
    I have a POWER call inside a SQL function. What is the correct way to handle overflow and underflow conditions, given that I cannot use TRY...CATCH inside a function? I am also trying to avoid modifying the ARITHABORT, ANSI_WARNINGS, and ARITHIGNORE settings in the calling code.

        GO
        CREATE FUNCTION TestPow()
        RETURNS DECIMAL(30, 14)
        AS
        BEGIN
            DECLARE @result FLOAT
            SET @result = POWER(10.0, 300)  -- overflows: with a decimal base, POWER returns a decimal
            RETURN @result
        END
        GO

        SELECT dbo.TestPow()
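
    One pattern that avoids the error, sketched under the assumption that returning NULL for out-of-range input is acceptable: pre-check the magnitude before calling POWER, and use float parameters so POWER itself computes in float arithmetic (zero and negative bases are skipped for simplicity).

        CREATE FUNCTION dbo.TestPowSafe(@base FLOAT, @exp FLOAT)
        RETURNS DECIMAL(30, 14)
        AS
        BEGIN
            -- DECIMAL(30,14) holds at most 16 digits left of the point, so any
            -- result with log10 >= 16 would overflow; return NULL instead.
            IF @base <= 0 OR @exp * LOG10(@base) >= 16
                RETURN NULL;
            RETURN POWER(@base, @exp);
        END
        GO

        SELECT dbo.TestPowSafe(10.0, 300);  -- NULL instead of an arithmetic overflow error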

  • Design SQL query for the following case

    - by rs
    Consider the tables:

        Table1
        id  name
        --  ----
        1   xyz
        2   abc
        3   pqr

        Table2
        id  title
        --  -----
        1   Mg1
        2   Mg2
        3   SG1

        Table3
        tb1_id  tb2_id  count
        ------  ------  -----
        1       1       3
        1       2       3
        1       3       4
        2       2       1
        3       2       2
        3       3       2

    I want to write a query that gives a result like:

        id  title
        --  -----------------
        1   MG1
        2   MG2
        3   Two or More Title

    MG1 has higher preference: if a row matches MG1, it is given the MG1 title; for rows matching a single other title, the corresponding title is used; and when two or more titles (without MG1) apply, "Two or More Title" is shown.
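
    A sketch under one reading of the rule (MG1 wins whenever present; a single matching title is shown as-is; otherwise "Two or More Title"); the count column is not needed under this reading:

        SELECT t3.tb1_id AS id,
               CASE
                   WHEN MAX(CASE WHEN t2.title = 'Mg1' THEN 1 ELSE 0 END) = 1
                       THEN 'MG1'
                   WHEN COUNT(DISTINCT t2.title) = 1
                       THEN MAX(t2.title)
                   ELSE 'Two or More Title'
               END AS title
        FROM Table3 t3
        JOIN Table2 t2 ON t2.id = t3.tb2_id
        GROUP BY t3.tb1_id;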

  • What is happening in this T-SQL code? (Concatenating the results of a SELECT statement)

    - by Ben McCormack
    I'm just starting to learn T-SQL and could use some help understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

        DECLARE @column_list AS varchar(max)

        SELECT @column_list = COALESCE(@column_list, ',') +
               'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) +
               ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) +
               ' - ' + Convert(varchar, Description) + '],'
        FROM OrderDetailDeliveryReview
        INNER JOIN InvMast ON SKU2 = SKU AND LocationTypeID = 4
        GROUP BY Sku2, Description
        ORDER BY Sku2

        SET @column_list = Left(@column_list, Len(@column_list) - 1)

        Select @column_list

    One row is returned:

        ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...

    The T-SQL code does exactly what I want, which is to build a single result from the rows of a query, to be used in another query. However, I can't figure out how the SELECT @column_list = ... statement puts multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that having the variable within the SELECT statement makes the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
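
    A stripped-down sketch of the same pattern in isolation: when the assignment SELECT @var = @var + ... runs, it is evaluated once per row of the result set, so the variable accumulates all rows into one string (widely relied on, though the per-row evaluation order is not formally documented as guaranteed).

        DECLARE @list varchar(max);

        CREATE TABLE #t (name varchar(10));
        INSERT INTO #t SELECT 'a' UNION ALL SELECT 'b' UNION ALL SELECT 'c';

        -- Each row appends to @list instead of being returned to the client.
        SELECT @list = COALESCE(@list + ', ', '') + name FROM #t;

        SELECT @list;  -- a, b, c

        DROP TABLE #t;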

  • Help with a T-SQL query

    - by user324650
    Hi. Based on the following table:

        Path
        -----------------------
        area1
        area1\area2
        area1\area2\area3
        area1\area2\area3\area4
        area1\area2\area5
        area1\area2\area6
        area1\area7

    The input to my stored procedure is an area path and a number of children (indicating the depth that needs to be considered from the input area path). For

        areapath = area1, children = 2

    the above should give:

        Path
        -----------------------
        area1
        area1\area2
        area1\area2\area3
        area1\area2\area5
        area1\area2\area6
        area1\area7

    Similarly, for areapath = area2 and children = 1 the output should be:

        Path
        -----------------------
        area1\area2
        area1\area2\area3
        area1\area2\area5
        area1\area2\area6

    I am confused about how to write a query for this one.
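
    A sketch of one approach, assuming the paths live in a table (here called Areas) and that depth can be measured by counting '\' separators; segment matching is simplified with LIKE and would need tightening for names that are substrings of each other:

        CREATE PROCEDURE GetAreaSubtree
            @areapath varchar(255),
            @children int
        AS
        BEGIN
            -- Depth of a path = number of '\' separators it contains.
            DECLARE @depth int;
            SELECT @depth = LEN(Path) - LEN(REPLACE(Path, '\', ''))
            FROM Areas
            WHERE Path = @areapath OR Path LIKE '%\' + @areapath;

            -- Keep rows in the subtree whose depth is within @children of the anchor.
            SELECT Path
            FROM Areas
            WHERE (Path LIKE '%' + @areapath + '\%' OR Path LIKE '%' + @areapath)
              AND LEN(Path) - LEN(REPLACE(Path, '\', '')) <= @depth + @children;
        END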

  • SQL - Two foreign keys that have a dependency between them

    - by Brian
    The current structure is as follows:

        Table RowType:
            RowTypeID

        Table RowSubType:
            RowSubTypeID
            FK_RowTypeID

        Table ColumnDef:
            FK_RowTypeID
            FK_RowSubTypeID (nullable)

    In short, I'm mapping column definitions to rows. In some cases those rows have subtypes, which have column definitions specific to them. Alternatively, I could hang the column definitions that are specific to subtypes off their own table, or I could combine the data in RowType and RowSubType into one table and work with a single ID, but I'm not sure either is a better solution (if anything, I'd lean towards the latter, as we mostly end up pulling ColumnDefs for a given RowType/RowSubType).

    Is the current design SQL blasphemy? If I keep the current structure, how do I ensure that when RowSubTypeID is specified in ColumnDef, it corresponds to the RowType specified by RowTypeID? Should I try to enforce this with a trigger, or am I missing a simple redesign that would solve the problem?
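
    The usual declarative fix, sketched with assumed column types: make (RowTypeID, RowSubTypeID) the key of RowSubType and give ColumnDef a composite foreign key over both columns, so a non-NULL subtype can only pair with its own row type:

        CREATE TABLE RowType
        (
            RowTypeID int PRIMARY KEY
        );

        CREATE TABLE RowSubType
        (
            RowTypeID    int REFERENCES RowType (RowTypeID),
            RowSubTypeID int,
            PRIMARY KEY (RowTypeID, RowSubTypeID)
        );

        CREATE TABLE ColumnDef
        (
            ColumnDefID  int IDENTITY PRIMARY KEY,
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NULL,
            -- While RowSubTypeID is NULL the composite FK is not checked; once it
            -- is set, the (RowTypeID, RowSubTypeID) pair must exist in RowSubType.
            FOREIGN KEY (RowTypeID, RowSubTypeID)
                REFERENCES RowSubType (RowTypeID, RowSubTypeID)
        );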

  • Copy SQL 2005 view result column headers?

    - by DonnMt
    Is there a way to copy view output column headers along with the data? There is a setting in Options to include column headers with query results, but it only works with "New Query" windows and stored procedure output. It looks like SSMS 2008 has this functionality built into the context menu when you right-click on results, but I only have 2005. Am I out of luck? Thanks for any help.
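
    A workaround sketch for 2005, relying on the Options setting mentioned above (Tools > Options > Query Results > SQL Server > Results to Grid > include column headers when copying); the view name is a placeholder: select from the view in a New Query window, so the output counts as ordinary query results and copies with its header row.

        -- Run in a New Query window rather than the view designer's results pane;
        -- with the grid option enabled, copied cells include the header row.
        SELECT * FROM dbo.MyView;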
