Search Results

Search found 35784 results on 1432 pages for 'number format'.

  • One bi-directional TCP socket or two uni-directional? (Linux, high volume, low latency)

    - by osgx
    Hello, I need to send (interchange) a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2G+). The OS is Linux. Would it be faster to use 1 TCP socket (for send and recv) or 2 uni-directional TCP sockets? The test for this task is much like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The benefit of 2 uni-directional connections comes from the Linux source (http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847):

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions for disabling the fast path are false here, so only a socket that is not uni-directional keeps the kernel off the receive fast path.
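
    A minimal ping-pong latency sketch over a single bi-directional TCP socket, in Python rather than C, to make the measurement loop concrete. The host, port, and repeat count are made up for the example; a real comparison would run the same loop again over two uni-directional connections, with the same socket options the production code uses.

        import socket, threading, time

        HOST, PORT, REPEATS = "127.0.0.1", 50007, 3

        def echo_server():
            with socket.create_server((HOST, PORT)) as srv:
                conn, _ = srv.accept()
                with conn:
                    while True:
                        data = conn.recv(65536)
                        if not data:
                            break
                        conn.sendall(data)          # bounce the payload straight back

        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)                             # crude wait for the listener to come up

        with socket.create_connection((HOST, PORT)) as sock:
            sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
            for exp in range(1, 14):                # 2^1 .. 2^13 bytes, as in the question
                size = 2 ** exp
                payload = b"x" * size
                start = time.perf_counter()
                for _ in range(REPEATS):
                    sock.sendall(payload)
                    received = 0
                    while received < size:          # read until the full echo is back
                        received += len(sock.recv(65536))
                rtt = (time.perf_counter() - start) / REPEATS
                print(f"{size:6d} bytes  avg round trip {rtt * 1e6:9.1f} us")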

    Read the article

  • I'm writing a spellchecking program - how do I replace ch in a string?

    - by Ajay Hopkins
    What am I doing wrong, and what can I do?

        import sys
        import string

        def remove(file):
            punctuation = string.punctuation
            for ch in file:
                if len(ch) > 1:
                    print('error - ch is larger than 1 --| {0} |--'.format(ch))
                if ch in punctuation:
                    ch = ' '
                    return ch
                else:
                    return ch

        ref = open("ref.txt", "r")
        test_file = open("test.txt", "r")
        dictionary = ref.read().split()
        file = test_file.read().lower()
        file = remove(file)
        print(file)

    P.S. this is in Python 3.1.2
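
    For comparison, a sketch of one way to strip punctuation before spell-checking, assuming the goal is to blank out every punctuation character in the whole text (the file names are the ones from the question). str.translate does the per-character replacement in a single pass instead of returning from the loop on the first character.

        import string

        def remove_punctuation(text):
            # map every punctuation character to a space, leave everything else alone
            table = str.maketrans(string.punctuation, " " * len(string.punctuation))
            return text.translate(table)

        with open("test.txt", "r") as test_file:
            words = remove_punctuation(test_file.read().lower()).split()

        with open("ref.txt", "r") as ref:
            dictionary = set(ref.read().split())

        misspelled = [w for w in words if w not in dictionary]
        print(misspelled)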

    Read the article

  • How do I check if a variable has been initialized

    - by Javier Parra
    First of all, I'm fairly new to Java, so sorry if this question is utterly simple. The thing is: I have a String[] s made by splitting a String, in which every item is a number. I want to cast the items of s into an int[] n. s[0] contains the number of items that n will hold, effectively s.length-1. I'm trying to do this using a foreach loop:

        int[] n;
        for (String num : s) {
            //if (n is not initialized) {
                n = new int[(int) num];
                continue;
            }
            n[n.length] = (int) num;
        }

    Now, I realize that I could use something like this:

        int[] n = new int[(int) s[0]];
        for (int i = 0; i < s.length; i++) {
            n[n.length] = (int) s[i];
        }

    But I'm sure that I will be faced with that "if n is not initialized, initialize it" problem in the future.

    Read the article

  • How to identify whether a file is a text file or something else using C# .NET

    - by bjh Hans
    I need to read a file as a text file and process it later. But before I read it, how can I identify whether the file really is a text file? If the file is in another format, my whole program will interpret it wrongly. I want to access and process text files only. Currently I am using:

        StreamReader objReader = new StreamReader(filePath);

    How can I do this in C# .NET?
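
    There is no definitive test, but the usual heuristic is to sample the first few kilobytes and treat the file as binary if the sample contains NUL bytes or does not decode as text. A sketch of that idea (in Python, to keep it language-neutral; the sample size, encoding, and file name are arbitrary choices, and a multi-byte character cut off at the end of the sample could cause a false negative):

        def looks_like_text(path, sample_size=4096, encoding="utf-8"):
            with open(path, "rb") as fh:
                sample = fh.read(sample_size)
            if b"\x00" in sample:         # NUL bytes almost never appear in plain text
                return False
            try:
                sample.decode(encoding)   # does the sample decode cleanly?
            except UnicodeDecodeError:
                return False
            return True

        print(looks_like_text("some_file.txt"))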

    Read the article

  • SQL query to translate a list of numbers matched against several ranges, to a list of values

    - by Claes Mogren
    I need to convert a list of numbers that fall within certain ranges into a list of values, ordered by a priority column. The table has the following values:

        | YEAR | R_MIN  | R_MAX  | VAL | PRIO |
        ---------------------------------------
          2010   18000    90100    52    6
          2010   240000   240099   82    3
          2010   250000   259999   50    5
          2010   260000   260010   92    1
          2010   330000   330010   73    4
          2010   330011   370020   50    5
          2010   380000   380050   84    2

    The ranges will be different for different years. The ranges within one year will never overlap. The input will be a year and a list of numbers that might fall within one of these ranges. The list of input numbers will be small, 1 to 10 numbers. Example of input numbers: (20000, 240004, 375000, 255000). With that input I would like to get a list ordered by the priority column, or a single value:

        82
        50
        52

    The only value I'm interested in here is 82, so UNIQUE and MAX_RESULTS=1 would do. It can easily be done with one query per number, and then sorting in the Java code, but I would prefer to do it in a single SQL query. What SQL query, to be run in an Oracle database, would give me the desired result?
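
    A runnable sketch of the join shape, using Python's built-in sqlite3 rather than Oracle (in Oracle the same SELECT can run against a global temporary table or a bound collection; table and column names follow the question): load the input numbers into a small table, join on BETWEEN, and order by priority.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE ranges (year INT, r_min INT, r_max INT, val INT, prio INT)")
        con.executemany("INSERT INTO ranges VALUES (?,?,?,?,?)", [
            (2010, 18000, 90100, 52, 6), (2010, 240000, 240099, 82, 3),
            (2010, 250000, 259999, 50, 5), (2010, 260000, 260010, 92, 1),
            (2010, 330000, 330010, 73, 4), (2010, 330011, 370020, 50, 5),
            (2010, 380000, 380050, 84, 2)])
        con.execute("CREATE TABLE inputs (num INT)")
        con.executemany("INSERT INTO inputs VALUES (?)",
                        [(20000,), (240004,), (375000,), (255000,)])

        rows = con.execute("""
            SELECT r.val, MIN(r.prio) AS prio
              FROM inputs i
              JOIN ranges r ON i.num BETWEEN r.r_min AND r.r_max
             WHERE r.year = 2010
             GROUP BY r.val
             ORDER BY prio""").fetchall()

        print([val for val, _ in rows])   # [82, 50, 52]; rows[0][0] alone gives the 82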

    Read the article

  • How to insert a custom date and time into Oracle using Java?

    - by shree
    Hi, I have a column of type DATE. I want to insert a custom date and time without using PreparedStatement. I have used:

        String date = sf.format(Calendar.getInstance().getTime());
        String query = "Insert into entryTbl(name, joinedDate, ..etc) values ('abc', to_date(date, 'yyyy/mm/dd HH:mm:ss'))";
        statement.executeUpdate(query);

    but I am getting a "literal does not match format string" error. I even tried SYSDATE, but it inserts only the date, not the time. So how do I insert a date and time into Oracle using Java? Please, anyone, help.
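
    A sketch of the string-building side only (printed rather than executed, since no Oracle instance is assumed here): the formatted timestamp has to be embedded as a quoted literal rather than the variable name "date", and the to_date() mask has to describe that exact layout; in Oracle the 24-hour mask is HH24 and minutes are MI, not mm. Table and column names are the ones from the question.

        from datetime import datetime

        joined = datetime.now().strftime("%Y/%m/%d %H:%M:%S")   # e.g. 2010/05/12 14:30:00
        query = ("Insert into entryTbl(name, joinedDate) "
                 f"values ('abc', to_date('{joined}', 'yyyy/mm/dd hh24:mi:ss'))")
        print(query)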

    Read the article

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates of different polygons. Each row of this file has X,Y coordinates and a string which tells which polygon this data point belongs to. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:

        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        .....

    The Index.csv file has the number of data points (number of rows) for each polygon in Polygons.csv. These details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks messy. I need the polygons to be separated (they are connected and overlap in some areas, though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon that it can find, and that is not desirable. I only want to fill the inside of the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200 KB. Thanks

        [coordinates,routeNames,polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');
        firstPointer = 0
        lastPointer = index(1)
        for Counter = 2:size(index)
            firstPointer = firstPointer + index(Counter) + 1
            hold on
            plot(coordinates(firstPointer:lastPointer,2), coordinates(firstPointer:lastPointer,1), 'r-')
            lastPointer = lastPointer + index(Counter)
        end
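
    A sketch of the same idea in Python/matplotlib (a MATLAB version would call fill once per polygon in the same way): group the rows by polygon name so each polygon becomes its own closed, filled patch, instead of letting consecutive polygons share line segments. The CSV layout (name,x,y per row) is assumed from the question's description.

        import csv
        from collections import defaultdict
        import matplotlib.pyplot as plt

        polygons = defaultdict(list)
        with open("Polygons.csv", newline="") as fh:
            for name, x, y in csv.reader(fh):
                polygons[name].append((float(x), float(y)))

        for name, points in polygons.items():
            xs, ys = zip(*points)
            plt.fill(xs, ys, alpha=0.4, label=name)   # one filled patch per polygon

        plt.legend()
        plt.show()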

    Read the article

  • Detect if HTTP request is from browser / Flex asynchronous request?

    - by Andree
    Hi there! When a Flex application makes an asynchronous HTTP request, does it add a special header to the request, like some JavaScript frameworks do? Something that indicates whether the request is an AJAX call or not. I just want my server-side code to return a different response format depending on whether the request was made from the browser or from Flex. Regards, Andree.

    Read the article

  • How to change CSS color values in real-time off a javascript slider?

    - by bflora
    I'm making a page where the user gets a JavaScript slider that goes from 0 to 100 and can use it to set the opacity of a div on the page. I want the opacity of that div to change in real time as they work the slider. I've not done this before. What's the best approach? The cursor in the slider displays the slider's current value as you move it. It seems that I just need to find a way to display that value in any arbitrary other place on the page, so I can use it in the style settings for the div. The .js file that generates the slider has a line that (I think) is setting the current value in the cursor:

        $(this).children(".ui-slider-handle", context).html(parseInt(settings[index]['default']));

    To get this changing number to display somewhere else at the same time, do I just need to add a div somewhere and then add a line like this?

        $("#newDivId").children(".ui-slider-handle", context).html(parseInt(settings[index]['default']));

    That seems like it would give me the number showing up in a div. How then would I get it into a form I could put into the style settings for a div? If this were a PHP variable, I would do something like style="opacity:<?php print $value ?>;". What would be the .js equivalent?

    Read the article

  • deleting object with template for int and object

    - by Yokhen
    Alright, so say I have a class with all its definition, bla bla bla...

        template <class DT>
        class Foo {
        private:
            DT* _data;
            // other stuff
        public:
            Foo(DT* data) { _data = data; }
            virtual ~Foo() { delete _data; }
            // other methods
        };

    And then I have in the main method:

        int main() {
            int number = 12;
            Foo<anyRandomClass>* noPrimitiveDataObject = new Foo<anyRandomClass>(new anyRandomClass());
            Foo<int>* intObject = new Foo<int>(number);

            delete noPrimitiveDataObject; // Everything goes just fine.
            delete intObject;             // It messes up here, I think because primitive data types
                                          // such as int are allocated in a different way.
            return 0;
        }

    My question is: what could I do to have both delete statements in the main method work just fine? P.S.: Although I have not actually compiled/tested this specific code, I have reviewed it extensively (as well as indented it. You're welcome.), so if you find a mistake, please be nice. Thank you.

    Read the article

  • Localized Date Validator

    - by Blithe
    Is there a way to use the user's culture to localize the Range Validator for dates? I am looking for a good way to validate dates while avoiding a fixed format (e.g. enforcing dd/mm/yyyy with a Regular Expression Validator).

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines ts univocally.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts's are sorted "inside" each group, i.e. B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find in B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that I now have 1000 a's, assume the average number of b's per a stays constant (probably 100-200), and consider that the number of observations per a is probably in the order of 300. In production I will have 1000 times as many a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one discussed above. How would I improve the performance?
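
    A sketch of one way to speed this up while staying in pandas/NumPy: slice and sort B's timestamps per group once, then binary-search with np.searchsorted per row of A, instead of rebuilding B.ix[row[0]] on every apply call. (In newer pandas, pd.merge_asof with by='a' and direction='forward' expresses the same "first observation at or after ts" lookup in a single call.)

        import numpy as np
        import pandas as pd

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

        # one NumPy array of (already sorted) observation times per event type
        b_groups = {key: grp['ts'].values for key, grp in B.groupby(level=0)}

        def first_at_or_after(a_key, ts):
            arr = b_groups[a_key]
            return arr[np.searchsorted(arr, ts)]   # first element >= ts, guaranteed to exist

        C = pd.Series([first_at_or_after(idx[0], ts) for idx, ts in A['ts'].items()],
                      index=A.index, name='ts')
        print(C)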

    Read the article

  • not use "using" statement for TransactionScope

    - by hotyi
    I always use the following format with TransactionScope:

        using (TransactionScope scope = new TransactionScope()) {
            ....
        }

    Sometimes I want to wrap the TransactionScope in a new class, for example a DbContext class, and use statements like:

        dbContext.Begin();
        ...
        dbContext.Submit();

    It seems the TransactionScope class needs the "using" statement to be disposed. I want to know whether there is any way to avoid "using".

    Read the article

  • JExcelAPI on Android

    - by user121196
    When I added JExcelAPI (http://jexcelapi.sourceforge.net/) to the classpath and ran my app, I got:

        trouble writing output: shouldn't happen
        [2009-07-16 14:32:19 - xxx] Conversion to Dalvik format failed with error 2

    Any idea? Thanks

    Read the article

  • string auto splitting in each loop - jquery

    - by sluggerdog
    I have the following jQuery code that loops through returned JSON data. For some reason the suburb is being split at the space when assigned as the value, but not as the text, and I cannot work out why this is happening.

    MY CODE

        $.each(data, function(index, obj) {
            $.each(obj, function(key, value) {
                var suburb = $.trim(value['mcdl01']);
                var number = $.trim(value['mcmcu']);
                $("#FeedbackBranchName").append("<option value=" + suburb + ">" + suburb + " (" + number + ")</option>");
            });
        });

    SAMPLE RETURNED RESULTS

        <option **value="AIRLIE" beach=""**>AIRLIE BEACH (4440)</option>
        <option value="ASHMORE">ASHMORE (4431)</option>
        <option **value="BANYO" commercial=""**>BANYO COMMERCIAL (4432)</option>
        <option value="BEENLEIGH">BEENLEIGH (4413)</option>
        <option value="BERRIMAH">BERRIMAH (4453)</option>
        <option **value="BOWEN" hills=""**>BOWEN HILLS (4433)</option>

    Notice how for AIRLIE BEACH, BANYO COMMERCIAL and BOWEN HILLS the second word has been separated out of the value attribute, but it's fine at the text level. Anyone have any idea why this might happen? Thanks

    Read the article

  • How to concatenate all commit messages from subversion into one text file with no metadata?

    - by user144182
    I would like to take all the commit messages in my Subversion log and just concatenate them into one text file. Each commit message has this format:

        - r1 message
        - r1 message
        - r1 message

    What I would like is something like:

        - r1 message
        - r1 message
        - r2 message
        - r2 message
        - r3 message
        [...]
        - r1000 message

    Update: I thought the above was clear, but what I don't want in the log is this type of info:

        r2130 | user | 2010-03-19 10:36:13 -0400 (Fri, 19 Mar 2010) | 1 line

    No metadata, I simply want the commit messages.
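
    One possible approach (a sketch, not the only way): ask Subversion for the machine-readable log with "svn log --xml" and keep only the <msg> elements, which sidesteps the metadata lines entirely. The repository URL and output file name below are placeholders.

        import subprocess
        import xml.etree.ElementTree as ET

        xml_log = subprocess.run(
            ["svn", "log", "--xml", "https://example.com/svn/repo"],   # hypothetical URL
            check=True, capture_output=True, text=True).stdout

        with open("messages.txt", "w", encoding="utf-8") as out:
            for entry in ET.fromstring(xml_log).iter("logentry"):
                msg = entry.findtext("msg", default="").strip()
                if msg:
                    out.write(msg + "\n")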

    Read the article

  • Determine stale data

    - by Andrei
    Say I have a file in this format:

        12:04:21  .3
        12:10:21  1.3
        12:13:21  1.4
        12:14:21  1.3
        ... and so on

    I want to find repeated numbers in the second column for, say, 10 consecutive timestamps, thereby finding staleness, and I want to output the beginning and end of the stale timestamp range. Can someone help me come up with it? You can use awk or bash. Thanks
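
    Not awk or bash, but a sketch of the same run-length idea in Python, assuming the file name and the whitespace-separated "time value" layout shown above; runs of 10 or more identical values are reported with their first and last timestamps.

        STALE_RUN = 10

        def stale_ranges(path):
            run = []                                  # (timestamp, value) pairs of the current run
            with open(path) as fh:
                for line in fh:
                    ts, value = line.split()
                    if run and value == run[-1][1]:
                        run.append((ts, value))       # value unchanged: extend the run
                    else:
                        if len(run) >= STALE_RUN:
                            yield run[0][0], run[-1][0], run[0][1]
                        run = [(ts, value)]           # value changed: start a new run
            if len(run) >= STALE_RUN:                 # flush a stale run at end of file
                yield run[0][0], run[-1][0], run[0][1]

        for start, end, value in stale_ranges("data.txt"):   # hypothetical file name
            print(f"value {value} stale from {start} to {end}")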

    Read the article

  • In MS Access form, how to color background of selected record?

    - by PowerUser
    I have a somewhat complicated-looking Access form with a continuous display (meaning multiple records are shown at once). I'd like to change the background color of the selected record only, so the end user can easily tell which record they are on. I'm thinking of perhaps a conditional format, or maybe something like this:

        Private Sub Detail_HasFocus()
            Detail.BackColor(me.color) = vbBlue
        End Sub

    and something similar for when that row loses focus. This code snippet obviously won't work, but it's the kind of code I'd like to achieve.

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an Execution Plan. The Execution Plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the inner and outer tables, the type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.

    Join algorithms:

        Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
        Merge Join:        best for medium to large sorted inputs, or an output that needs to be ordered.
        Hash Join:         best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose its algorithm for joins, or does it always use one?
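
    For reference, a sketch of the hash-join idea for in-memory collections (LINQ to Objects' Enumerable.Join is generally described as building a hash-based lookup of the inner sequence in much the same spirit, rather than choosing between algorithms the way SQL Server does): hash the inner rows by key in one pass, then stream the outer rows and probe the table.

        from collections import defaultdict

        def hash_join(outer, inner, outer_key, inner_key):
            lookup = defaultdict(list)
            for row in inner:                      # build phase: one pass over the inner input
                lookup[inner_key(row)].append(row)
            for row in outer:                      # probe phase: one pass over the outer input
                for match in lookup.get(outer_key(row), []):
                    yield row, match

        people = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bob"}]
        orders = [{"person_id": 1, "item": "book"}, {"person_id": 1, "item": "pen"}]
        for person, order in hash_join(people, orders,
                                       lambda p: p["id"], lambda o: o["person_id"]):
            print(person["name"], order["item"])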

    Read the article

  • spatial indexing mysql

    - by shaiju
    I have to integrate spatial indexing in MySQL. I got an example and have almost done it. In that example:

        INSERT INTO address VALUES('Foobar street 12', GeomFromText('POINT(2671 2500)'));

    In place of 2671 and 2500 I have to insert latitude and longitude in this format: 35.177, -77.11. How is that possible? Please help me.
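
    A minimal sketch of the substitution: WKT accepts floating-point coordinates, so the latitude and longitude can go straight into the POINT text (whether you store them as "lat lon" or "lon lat" is a convention you have to pick and keep consistent with your queries). Here the statement is only built and printed, not executed against a database.

        lat, lon = 35.177, -77.11
        sql = ("INSERT INTO address VALUES("
               f"'Foobar street 12', GeomFromText('POINT({lat} {lon})'))")
        print(sql)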

    Read the article

  • trying to append a list, but something breaks

    - by romunov
    I'm trying to create an empty list which will have as many elements as there are num.of.walkers. I then try to append, to each created element, a new sub-list (the length of the new sub-list corresponds to a value in a). When I fiddle around in R everything goes smoothly:

        list.of.dist[[1]] <- vector("list", a[1])
        list.of.dist[[2]] <- vector("list", a[2])
        list.of.dist[[3]] <- vector("list", a[3])
        list.of.dist[[4]] <- vector("list", a[4])

    I then try to write a function. Here is my feeble attempt, which results in an error. Can someone chip in on what I am doing wrong?

        countNumberOfWalks <- function(walk.df) {
            list.of.walkers <- sort(unique(walk.df$label))
            num.of.walkers <- length(unique(walk.df$label))

            # Pre-allocate objects for further manipulation
            list.of.dist <- vector("list", num.of.walkers)
            a <- c()

            # Count the number of walks per walker.
            for (i in list.of.walkers) {
                a[i] <- nrow(walk.df[walk.df$label == i,])
            }
            a <- as.vector(a)

            # Add a sublist (length = number of walks) for each walker.
            for (i in i:num.of.walkers) {
                list.of.dist[[i]] <- vector("list", a[i])
            }
            return(list.of.dist)
        }

        > num.of.walks.per.walker <- countNumberOfWalks(walk.df)
        Error in vector("list", a[i]) : vector size cannot be NA
    Read the article
