Search Results

Search found 95301 results on 3813 pages for 'server client'.

  • Sending a packet to multiple clients at a time from a UDP socket

    - by mawia
    Hi all, I am trying to write a UDP server that sends a copy of a file to multiple clients. Suppose I know the addresses of those clients statically (for the sake of simplicity) and want to send the packet to each of those addresses. How exactly do I fill the sockaddr structures to hold the client addresses? I am keeping an array of sockaddr structures (one per client) and sending to each entry in turn; the problem is filling each individual structure with the right client address. I was trying something like this:

        sa[1].sin_family = AF_INET;
        sa[1].sin_addr.s_addr = htonl(INADDR_ANY); // shouldn't I replace INADDR_ANY with the client IP?
        sa[1].sin_port = htons(50002);

    Correct me if this is not the correct way. All your help in this regard will be highly appreciated. With thanks in advance, Mawia

  • Please help me in creating an update query

    - by Rajesh Rolen- DotNet Developer
    I have a table with five columns, and the query requirement is: update row 8 (id = 8) so that its column 2 and column 3 take their values from row 9's column 2 and column 3. In other words, the values of columns 2 and 3 should be shifted up one row (starting from row 8), and columns 2 and 3 of the last row should become null. For example, with just 3 rows, the first row is untouched, the second to (N-1)th rows are shifted once, and the Nth row gets nulls.

        id  math  science  sst  hindi  english
        1   11    12       13   14     15
        2   21    22       23   24     25
        3   31    32       33   34     35

    The result of the query for id = 2 should be:

        id  math  science  sst  hindi  english
        1   11    12       13   14     15
        2   31    32       23   24     25    -- values of row 3 (columns 2, 3) shifted to row 2
        3   null  null     33   34     35

    This process should run for all rows whose id >= 2. Please help me create this update query. I am using MS SQL Server 2005.
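
    A minimal sketch of one way to do the shift with a self-join on id + 1, assuming the id values are contiguous; the table name dbo.Marks is hypothetical and the column names follow the example above:

        UPDATE cur
        SET cur.math    = nxt.math,
            cur.science = nxt.science
        FROM dbo.Marks AS cur                 -- hypothetical table name
        LEFT JOIN dbo.Marks AS nxt
               ON nxt.id = cur.id + 1         -- the next row supplies the new values
        WHERE cur.id >= 2;                    -- the LEFT JOIN leaves the last row's columns NULL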

  • UPDATE from SELECT complains about more than one value returned

    - by Álvaro G. Vicario
    I have this data structure:

        request
        =======
        building_id
        lot_code

        building
        ========
        building_id
        lot_id

        lot
        ===
        lot_id
        lot_code

    The request table is missing the value for the building_id column and I want to fill it in from the other tables. So I've tried this:

        UPDATE request
        SET building_id = (
            SELECT bu.building_id
            FROM building bu
            INNER JOIN lot lo ON bu.lot_id = lo.lot_id
            WHERE lo.lot_code = request.lot_code
        );

    But I'm getting this error: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression." Is it due to wrong syntax? The data model allows more than one building per lot, but the actual data doesn't contain such cases, so there should be at most one building_id per lot_code.
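
    A sketch of the usual workaround: drive the update from a join instead of a scalar subquery. This assumes, as the question states, that the data holds at most one building per lot_code; if duplicates did exist, this form would silently pick one rather than raise an error:

        UPDATE r
        SET r.building_id = bu.building_id
        FROM request  AS r
        INNER JOIN lot      AS lo ON lo.lot_code = r.lot_code
        INNER JOIN building AS bu ON bu.lot_id   = lo.lot_id;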

  • Using the current database name in a T-SQL script that has USE statements

    - by AmRoSH
    Hello everybody. I have an application that runs T-SQL statements to update more than one database. The problem is that I'm using the following T-SQL:

        USE [msdb]
        GO
        DECLARE @jobId BINARY(16)
        EXEC msdb.dbo.sp_add_job @job_name=N'test2', @enabled=1, @start_step_id=1,
             @notify_level_eventlog=0, @notify_level_email=2, @notify_level_netsend=2,
             @notify_level_page=2, @delete_level=0, @description=N'',
             @category_name=N'[Uncategorized (Local)]', @owner_login_name=N'sa',
             @notify_email_operator_name=N'', @notify_netsend_operator_name=N'',
             @notify_page_operator_name=N'', @job_id = @jobId OUTPUT
        select @jobId
        GO
        EXEC msdb.dbo.sp_add_jobserver @job_name=N'test2', @server_name = N'AMR-PC\SQL2008'
        GO
        USE [msdb]
        GO
        EXEC msdb.dbo.sp_add_jobstep @job_name=N'test2', @step_name=N'test', @step_id=1,
             @cmdexec_success_code=0, @on_success_action=1, @on_fail_action=2,
             @retry_attempts=0, @retry_interval=0, @os_run_priority=0, @subsystem=N'TSQL',
             @command=N'EXEC sp_MSforeachdb '' EXEC sp_MSforeachtable @command1=''''DBCC DBREINDEX (''''''''*'''''''')'''', @replacechar=''''*''''''',
             @database_name=N'Client5281',
             @output_file_name=N'C:\Documents and Settings\Amr\Desktop\Scheduel Reports\report',
             @flags=2
        GO
        USE [msdb]
        GO
        DECLARE @schedule_id int
        EXEC msdb.dbo.sp_add_jobschedule @job_name=N'test2', @name=N'test', @enabled=1,
             @freq_type=8, @freq_interval=1, @freq_subday_type=1, @freq_subday_interval=0,
             @freq_relative_interval=0, @freq_recurrence_factor=1,
             @active_start_date=20100517, @active_end_date=99991231,
             @active_start_time=0, @active_end_time=235959,
             @schedule_id = @schedule_id OUTPUT
        select @schedule_id
        GO

    I'm using USE [msdb] before every block, and I want to get the current database name so I can replace @database_name=N'Client5281' with the current database name instead of the [msdb] I'm switching to. I hope I explained what I want well.
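
    A small sketch of one way to pick up the current database name before switching to msdb, assuming the script runs while connected to the target database; only the sp_add_jobstep call is shown, and the captured name feeds @database_name in place of the hard-coded value:

        DECLARE @targetDb sysname;
        SET @targetDb = DB_NAME();        -- capture the current database before USE [msdb]

        USE [msdb];

        EXEC msdb.dbo.sp_add_jobstep
             @job_name      = N'test2',
             @step_name     = N'test',
             @subsystem     = N'TSQL',
             @command       = N'...',     -- same command text as above
             @database_name = @targetDb;  -- current database instead of N'Client5281'

    Note that the DECLARE and the EXEC have to stay in the same batch (no GO in between), or the variable is lost.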

  • ASP.NET Report Viewer - Custom filter parameters

    - by Chris
    Hi all, for a data warehouse project I need to know about some best practices regarding custom report viewer filters/parameters. Usually I use the standard parameter feature for reports, like multiple select boxes, check boxes, text boxes, etc. But for the current project some reports require more complex report parameters. For example, a user wants to analyze some measures, and for that needs to set a filter on a specific address. There are over 100,000 addresses to choose from, so the user has to be able to search for an address (full text). Since such features cannot be done with the standard parameters, I will have to create custom parameters within an ASPX page which are then passed to the report viewer control. So my question is: are there any best practices on how to create custom parameters? Has anyone had similar problems, and if so, how did you solve it?

  • Best way to Sort-n-Concatenate 5 columns

    - by SDReyes
    Having five columns containing numeric values:

        A | B | C | D | E
        -------------------
        2 | 3 | 4 | 1 | 5
        3 | 6 | 1 | 5 | 4
        4 | 5 | 7 | 1 | 3

    I want to obtain the concatenation of the values after sorting them:

        ABCDE
        -----------
        1 2 3 4 5
        1 3 4 5 6
        1 3 4 5 7

    What is the best way to do it?
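
    A sketch of one approach that works on SQL Server 2005 and later: unpivot the five columns of each row with UNION ALL, sort them, and glue them back together with FOR XML PATH. The table name dbo.Scores is hypothetical:

        SELECT STUFF((
                   SELECT ' ' + CAST(v.val AS varchar(10))
                   FROM (SELECT t.A AS val
                         UNION ALL SELECT t.B
                         UNION ALL SELECT t.C
                         UNION ALL SELECT t.D
                         UNION ALL SELECT t.E) AS v
                   ORDER BY v.val
                   FOR XML PATH('')
               ), 1, 1, '') AS ABCDE        -- STUFF strips the leading space
        FROM dbo.Scores AS t;               -- hypothetical table holding columns A..E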

  • How to do a case sensitive GROUP BY?

    - by Abe Miessler
    If I execute the code below:

        with temp as
        (
            select 'Test' as name UNION ALL
            select 'TEST' UNION ALL
            select 'test' UNION ALL
            select 'tester' UNION ALL
            select 'tester'
        )
        SELECT name, COUNT(name)
        FROM temp
        group by name

    it returns the results:

        TEST    3
        tester  2

    Is there a way to have the GROUP BY be case sensitive so that the results would be:

        Test    1
        TEST    1
        test    1
        tester  2
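
    A sketch of the usual fix, assuming a case-sensitive collation such as Latin1_General_CS_AS is acceptable for the comparison: apply COLLATE both in the GROUP BY and in the select list so the grouping and the output agree:

        WITH temp AS
        (
            SELECT 'Test' AS name UNION ALL
            SELECT 'TEST' UNION ALL
            SELECT 'test' UNION ALL
            SELECT 'tester' UNION ALL
            SELECT 'tester'
        )
        SELECT name COLLATE Latin1_General_CS_AS AS name,
               COUNT(*) AS cnt
        FROM temp
        GROUP BY name COLLATE Latin1_General_CS_AS;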

  • Disable Primary Key and Re-Enable After SQL Bulk Insert

    - by Jon
    I am about to run a massive data insert into my DB. I have managed to work out how to disable and rebuild non-clustered indexes on my tables, but I also want to disable/enable primary keys. You can't disable the clustered index for the primary key, as the table is inaccessible when that is done, and my attempt to do an ALTER TABLE for constraints does not work, as I think that is only for foreign keys. Do you know of a way to disable the primary key and re-enable it after a SQL bulk insert? Note: this is over numerous tables, so I don't know the exact primary key specifications (e.g. name).
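
    A hedged sketch of the common compromise: the clustered primary key itself cannot be disabled without making the table unusable, so only the non-clustered indexes are disabled before the load and everything is rebuilt afterwards. The table name dbo.TargetTable is hypothetical; sys.indexes supplies the index names so they don't have to be known in advance:

        DECLARE @sql nvarchar(max);
        SET @sql = N'';

        -- build one ALTER INDEX ... DISABLE statement per non-clustered index
        SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(name)
                    + N' ON dbo.TargetTable DISABLE;' + CHAR(10)
        FROM sys.indexes
        WHERE object_id = OBJECT_ID(N'dbo.TargetTable')
          AND type_desc = N'NONCLUSTERED';

        EXEC sp_executesql @sql;

        -- ... run the bulk insert here ...

        -- rebuild all indexes on the table, which also re-enables the disabled ones
        ALTER INDEX ALL ON dbo.TargetTable REBUILD;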

  • C# TCP Client/Server communication issue

    - by Jamie
    What I'm currently trying to do is make a very basic webchat for IRC using Silverlight. Basically, I have a TCP server listening for connections from Silverlight. When a client connects, the server creates a new connection to IRC, and data is passed to/from the client/IRC via the server application. I've gotten it to work fine for one client connection, but as soon as two (or more) clients connect, multiple connections are made to IRC yet all data passed from the clients goes through the latest IRC connection (if that makes sense). For example, Client1, Client2 and Client3 are all connected to IRC, but no matter who sends data it all goes through Client3. Between the client and the server app it recognises the data coming in from different clients, so I believe the problem lies in the way I've connected to IRC. When the TCP server accepts a new client, a new thread is made to listen to incoming data, and from there a new thread is made to connect to IRC. I'm sure that's where the problem exists, but I've confused myself a lot now and am wondering if anyone can help me figure out a solution.

  • Efficiently select top row for each category in the set

    - by VladV
    I need to select a top row for each category from a known set (somewhat similar to this question). The problem is how to make this query efficient on a large number of rows. For example, let's create a table that stores temperature recordings in several places:

        CREATE TABLE #t (
            placeId int,
            ts datetime,
            temp int,
            PRIMARY KEY (ts, placeId)
        )

        -- insert some sample data
        SET NOCOUNT ON
        DECLARE @n int, @ts datetime
        SELECT @n = 1000, @ts = '2000-01-01'
        WHILE (@n > 0)
        BEGIN
            INSERT INTO #t VALUES (@n % 10, @ts, @n % 37)
            IF (@n % 10 = 0) SET @ts = DATEADD(hour, 1, @ts)
            SET @n = @n - 1
        END

    Now I need to get the latest recording for each of the places 1, 2, 3. This way is efficient, but doesn't scale well (and looks dirty):

        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 1 ORDER BY ts DESC
        ) t1
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 2 ORDER BY ts DESC
        ) t2
        UNION ALL
        SELECT * FROM (
            SELECT TOP 1 placeId, temp FROM #t WHERE placeId = 3 ORDER BY ts DESC
        ) t3

    The following looks better but works much less efficiently (30% vs 70% according to the optimizer):

        SELECT placeId, ts, temp
        FROM (
            SELECT placeId, ts, temp,
                   ROW_NUMBER() OVER (PARTITION BY placeId ORDER BY ts DESC) rownum
            FROM #t
            WHERE placeId IN (1, 2, 3)
        ) t
        WHERE rownum = 1

    The problem is that during the latter query's execution a clustered index scan is performed on #t and 300 rows are retrieved, sorted, numbered, and then filtered, leaving only 3 rows. For the former query, a single row is fetched three times. Is there a way to perform the query efficiently without lots of unions?
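
    A sketch of a CROSS APPLY alternative that keeps the one-seek-per-place shape without repeating the query by hand; it assumes an index leading on (placeId, ts) is available so each inner TOP 1 becomes an ordered seek rather than a scan:

        SELECT latest.placeId, latest.ts, latest.temp
        FROM (SELECT 1 AS placeId UNION ALL SELECT 2 UNION ALL SELECT 3) AS p
        CROSS APPLY (
            SELECT TOP 1 t.placeId, t.ts, t.temp
            FROM #t AS t
            WHERE t.placeId = p.placeId
            ORDER BY t.ts DESC
        ) AS latest;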

  • Converting an ID to a column name and replacing NULL with the last known value

    - by stackoverflowuser
    TABLE_A

        Rev  ChangedBy
        --------------
        1    A
        2    B
        3    C

    TABLE_B (the Words column datatype is ntext)

        Rev  Words          ID
        ----------------------
        1    description_1  52
        1    history_1      54
        2    description_2  52
        3    history_2      54

    TABLE_C

        ID   Name
        ----------------
        52   Description
        54   History

    Expected output:

        Rev  ChangedBy  Description    History
        ----------------------------------------
        1    A          description_1  history_1
        2    B          description_2  history_1
        3    C          description_2  history_2

    The Description and History columns should carry the previous known value when there is no value for that Rev. For example, since Rev 3 has no Description entry in TABLE_B, the last known value description_2 appears in the Description column for Rev 3 in the output.
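
    A sketch of one way to get this shape on SQL Server 2005 and later: one correlated TOP 1 subquery per output column, picking the most recent value at or before each Rev. The IDs 52 and 54 are hard-coded from TABLE_C for brevity, and Words is cast to nvarchar(max) to avoid the ntext restrictions inside subqueries:

        SELECT a.Rev,
               a.ChangedBy,
               (SELECT TOP 1 CAST(b.Words AS nvarchar(max))
                FROM TABLE_B AS b
                WHERE b.ID = 52 AND b.Rev <= a.Rev
                ORDER BY b.Rev DESC) AS Description,
               (SELECT TOP 1 CAST(b.Words AS nvarchar(max))
                FROM TABLE_B AS b
                WHERE b.ID = 54 AND b.Rev <= a.Rev
                ORDER BY b.Rev DESC) AS History
        FROM TABLE_A AS a
        ORDER BY a.Rev;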

  • Repeat Customers Each Year (Retention)

    - by spazzie
    I've been working on this and I don't think I'm doing it right. Our database doesn't keep track of how many customers we retain, so we looked for an alternate method. It's outlined in this article, which suggests filling in a table with these headers (the table goes out to 2012; I'm just saving space here):

        Year | Number of Customers | Number retained in 2009 | % retained in 2009 | Number retained in 2010 | % retained in 2010 | ...
        2008
        2009
        2010
        2011
        2012
        Total

    It tells you to find the total number of customers you had in your starting year. To do this I used this query, since our starting year is 2008:

        select YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        from dbo.tblOrder
        where OrderDate >= '2008-01-01' and OrderDate <= '2008-12-31'
        group by YEAR(OrderDate)

    At the moment we just differentiate our customers by email address. Then you have to search for the same customers who purchased again in later years (ours are 2009, 10, 11, and 12). I came up with this; it should find people who purchased in both 2008 and 2009:

        SELECT YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        FROM dbo.tblOrder o with (nolock)
        WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock)
                              WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1')
          AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock)
                              WHERE o2.OrderDate BETWEEN '2009-1-1' AND '2010-1-1')
          --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1'
          AND o.BillEmail NOT LIKE '%@halloweencostumes.com'
          AND o.BillEmail NOT LIKE ''
        GROUP BY YEAR(OrderDate)

    So I'm just finding the customers who purchased in both those years, and then running an independent query to find those who purchased in 2008 and 2010, then 2008 and 2011, and then 2008 and 2012. This one finds the 2008 and 2010 purchasers:

        SELECT YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers
        FROM dbo.tblOrder o with (nolock)
        WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock)
                              WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1')
          AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock)
                              WHERE o2.OrderDate BETWEEN '2010-1-1' AND '2011-1-1')
          --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1'
          AND o.BillEmail NOT LIKE '%@halloweencostumes.com'
          AND o.BillEmail NOT LIKE ''
        GROUP BY YEAR(OrderDate)

    So I have a different, unrelated query for each year comparison. In the end I'm finding people who bought in 2008 and 2009, and then a potentially different group that bought in 2008 and 2010, and so on. For this to be accurate, do I have to use the same group of 2008 buyers each time, i.e. the ones who bought in 2009 and 2010 and 2011 and 2012? This is where I'm worried and not sure how to proceed, or even how to find such data. Any advice would be appreciated! Thanks!
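
    A sketch of one way to keep the 2008 cohort fixed and count, in a single pass, how many of those buyers came back in each later year; the table and column names follow the queries above, and the e-mail exclusions are carried over:

        SELECT COUNT(DISTINCT CASE WHEN YEAR(o.OrderDate) = 2009 THEN o.BillEmail END) AS Retained2009,
               COUNT(DISTINCT CASE WHEN YEAR(o.OrderDate) = 2010 THEN o.BillEmail END) AS Retained2010,
               COUNT(DISTINCT CASE WHEN YEAR(o.OrderDate) = 2011 THEN o.BillEmail END) AS Retained2011,
               COUNT(DISTINCT CASE WHEN YEAR(o.OrderDate) = 2012 THEN o.BillEmail END) AS Retained2012
        FROM dbo.tblOrder AS o
        WHERE o.BillEmail IN (SELECT b.BillEmail                 -- the fixed 2008 cohort
                              FROM dbo.tblOrder AS b
                              WHERE b.OrderDate >= '2008-01-01'
                                AND b.OrderDate <  '2009-01-01')
          AND o.BillEmail NOT LIKE '%@halloweencostumes.com'
          AND o.BillEmail <> '';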

  • How to configure Apache for basic authentication, or to pass NTLM through, while proxying?

    - by trotzim
    Here is my case study:

        browser --- Apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is allowing HTTPS through the two proxies. A configuration that works with HTTP is something like this (yes, I want to use ProxyRequests, not ProxyPass):

        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>

    The user can authenticate on the Apache proxy, then the authentication chain is sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. If I try to use the SSPI (NTLM) module with something like this:

        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [ SSPI stuff ]
            Require valid-user
            Allow from all
        </Proxy>

    the Apache server rejects the authentication (or the ISA server does; I don't really know which). I used Wireshark to look at the nominal exchange while using the ISA server directly as the proxy: the first auth chain is Basic, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it passes the NTLM authentication through to the ISA proxy without checking it(*)? Or so that it rewrites headers to force Basic authentication? (*) It seems not to be as easy as it looks...

  • Using client time to calculate timezone

    - by Mike TK
    Hi folks, instead of asking for the client's timezone in the registration form (to correctly format datetimes; all server dates are in UTC), I thought about fetching the time from the client computer and calculating the time offset between client and server. Has anyone tried this? How often do clients have something insane on their system clocks? Cheers!

  • Ordering SQL query results

    - by user343409
    My SQL query gives the columns product_id (an integer) and pnl (a float, which can be negative). I get more than 100 rows. I want to keep only the top 40 rows based on abs(pnl), but the results should be ordered by the pnl column itself, not by abs(pnl). I want to do this on MS SQL Server 2005. Is there a way to do this?
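
    A sketch of one way to do this: pick the 40 rows by ABS(pnl) in a derived table, then let the outer query re-order them by pnl. The name dbo.MyResults is hypothetical and stands in for the actual query:

        SELECT top40.product_id, top40.pnl
        FROM (
            SELECT TOP 40 product_id, pnl
            FROM dbo.MyResults            -- hypothetical: replace with the real source query
            ORDER BY ABS(pnl) DESC        -- pick the 40 largest by absolute value
        ) AS top40
        ORDER BY top40.pnl;               -- but present them ordered by pnl itself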

  • SQL query WHERE clause with parameters that may be null

    - by Laziale
    I am trying to write a SQL query and build the WHERE condition dynamically depending on whether the parameters are null or not. I have something like this:

        SELECT tblOrder.ProdOrder, tblOrder.Customer
        FROM tblOrder
        CASE WHEN @OrderId IS NOT NULL
             THEN WHERE tblOrder.OrderId = @OrderId
             ELSE END
        CASE WHEN @OrderCustomer IS NOT NULL
             THEN AND tblOrder.OrderCustomer = @OrderCustomer
             ELSE END
        END

    This doesn't work; it's just a small prototype of how I want to assemble the query: if @OrderId is not null, include it in the WHERE clause, and if @OrderCustomer is not null, include it in the WHERE clause too. But I see a problem here: for example, if @OrderCustomer is not null but @OrderId is null, there will be an error because the WHERE keyword is not included. Any advice on how I can tackle this problem? Thanks in advance, Laziale
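
    A sketch of the usual pattern for optional parameters: fold each parameter into the WHERE clause with an IS NULL check, so a null parameter simply doesn't filter. OPTION (RECOMPILE) is an optional extra that lets the optimizer plan for the actual parameter values:

        SELECT o.ProdOrder, o.Customer
        FROM tblOrder AS o
        WHERE (@OrderId IS NULL OR o.OrderId = @OrderId)
          AND (@OrderCustomer IS NULL OR o.OrderCustomer = @OrderCustomer)
        OPTION (RECOMPILE);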

  • Return a dataset in a data flow

    - by praveen
    Hi all, could I get ideas on retrieving a dataset using the lookup method? Basically, my scenario is that the source data needs to look up against another source table, and for each matching column value from the source I need to get all the records from the other source; it's a one-to-many relation. I tried the Lookup, but it gives only one record per matching condition, and the OLE DB Command doesn't retrieve any data since it only does insert/update operations. Thanks, prav

  • SQL Group by Minute- Expanded

    - by Barnie
    I am working on something similar to this post: TS SQL - group by minute. However, mine is pulling from a message queue, and I need an accurate count of the traffic the message queue is creating/sending, and at what time:

        Select * From MessageQueue mq

    My expanded version of this is the following:

    A) The user defines a start time and an end time (easy enough with DECLAREd @StartTime and @EndTime).
    B) Give the user the option of choosing the "grouping": will it be broken out by 1 minute, 5 minutes, 15 minutes, or 30 minutes (max)? (I had thought to do this with a CASE statement, but my test queries fall apart on me.)
    C) Display the data to accurately show a count of what happened during the selected interval (grouping).

    This is where I am so far:

        DECLARE @StartTime datetime
        DECLARE @EndTime datetime

        SELECT DATEPART(n, mq.cre_date)/5 as Time    -- trying to group by 5 minute intervals
             , CONVERT(VARCHAR(10), mq.Cre_Date, 101)
             , COUNT(*) as results
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartDate AND @EndDate
        GROUP BY DATEPART(n, mq.cre_date)/5          -- trying to group by 5 minute intervals
               , eq.Cre_Date

    This is the output I would like to achieve:

        [Time]  [Date]       [Message Count]
        1300    06/26/2012   5
        1305    06/26/2012   1
        1310    06/26/2012   100
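
    A sketch of one common way to bucket rows into N-minute intervals with DATEADD/DATEDIFF arithmetic; @IntervalMinutes carries the user's choice (1, 5, 15 or 30), and the table and column names follow the query above:

        DECLARE @StartTime datetime, @EndTime datetime, @IntervalMinutes int
        SET @StartTime = '2012-06-26'
        SET @EndTime   = '2012-06-27'
        SET @IntervalMinutes = 5          -- user-selected grouping: 1, 5, 15 or 30

        SELECT DATEADD(minute,
                       (DATEDIFF(minute, @StartTime, mq.cre_date) / @IntervalMinutes) * @IntervalMinutes,
                       @StartTime)        AS interval_start,
               COUNT(*)                   AS message_count
        FROM dbo.MessageQueue AS mq
        WHERE mq.cre_date >= @StartTime
          AND mq.cre_date <  @EndTime
        GROUP BY DATEADD(minute,
                       (DATEDIFF(minute, @StartTime, mq.cre_date) / @IntervalMinutes) * @IntervalMinutes,
                       @StartTime)
        ORDER BY interval_start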

  • XML RPC client for C# over secured socket (https)

    - by Ummar
    I have a secured (HTTPS) XML-RPC server written in Python, and I have tested it with a Python-based client, but I need a C#-based client for it. I have tried xml-rpc.net, but it is not working with HTTPS. Can anyone please help me out, or will I have to write a client from scratch? Thanks
