Search Results

Search found 35784 results on 1432 pages for 'number format'.


  • How to identify whether a file is a text file or something else using C#.NET

    - by bjh Hans
    I need to read a file as a text file and process it later, but before I fetch it, how can I identify that the file really is a text file? If the file is in another format, my whole program interprets it wrongly. I want to access and process text files only. Currently I am using:

        StreamReader objReader = new StreamReader(filePath);

    How can I do this check in C# .NET?
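    There is no certain test, but a common heuristic (a sketch, not from the post; the 4 KB sample size and the set of allowed control characters are assumptions) is to scan the start of the file for bytes that rarely occur in plain text:

        using System.IO;

        static class TextFileProbe
        {
            // Heuristic only: returns false if the sample contains bytes that
            // rarely appear in plain text (NUL and most control characters).
            public static bool LooksLikeText(string filePath, int sampleSize = 4096)
            {
                byte[] buffer = new byte[sampleSize];
                int read;
                using (var stream = File.OpenRead(filePath))
                    read = stream.Read(buffer, 0, buffer.Length);

                for (int i = 0; i < read; i++)
                {
                    byte b = buffer[i];
                    // Allow tab (9), LF (10) and CR (13); reject NUL and other control bytes.
                    if (b == 0 || (b < 32 && b != 9 && b != 10 && b != 13))
                        return false;
                }
                return true;
            }
        }

    No heuristic is exact, but this filters out most binary formats before the StreamReader ever touches them.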

    Read the article

  • One bidirectional TCP socket or two unidirectional ones? (Linux, high volume, low latency)

    - by osgx
    Hello. I need to interchange a high volume of data periodically, with the lowest possible latency, between 2 machines. The network is rather fast (e.g. 1 Gbit or even 2 Gbit+) and the OS is Linux. Would it be faster to use 1 TCP socket (for both send and recv) or 2 unidirectional TCP sockets? The test for this task is very like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received at least 3 times (in the real task the number of sends is greater; both processes will be sending and receiving, ping-pong style). The expected benefit of 2 unidirectional connections comes from the Linux receive fast path (http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847):

        /*
         * TCP receive function for the ESTABLISHED state.
         *
         * It is split into a fast path and a slow path. The fast path is
         * disabled when:
         * ...
         * - Data is sent in both directions. Fast path only supports pure senders
         *   or pure receivers (this means either the sequence number or the ack
         *   value must stay constant)
         * ...
         *
         * When these conditions are not satisfied it drops into a standard
         * receive procedure patterned after RFC793 to handle all cases.
         * The first three cases are guaranteed by proper pred_flags setting,
         * the rest is checked inline. Fast processing is turned on in
         * tcp_data_queue when everything is OK.
         */

    All the other conditions that disable the fast path are false in my case, so only a socket that is not unidirectional keeps the kernel off the receive fast path.
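    For measuring the two setups, a minimal ping-pong latency probe looks like the sketch below (in C#; the address, port, and round count are placeholders, and the peer is assumed to simply echo every byte back):

        using System;
        using System.Diagnostics;
        using System.Net.Sockets;

        class PingPongProbe
        {
            static void Main()
            {
                // Hypothetical peer; disable Nagle so small writes go out immediately.
                using (var client = new TcpClient("192.168.0.2", 5000) { NoDelay = true })
                using (var stream = client.GetStream())
                {
                    const int rounds = 1000;
                    for (int power = 1; power <= 13; power++)
                    {
                        int size = 1 << power;
                        var buffer = new byte[size];
                        var sw = Stopwatch.StartNew();
                        for (int i = 0; i < rounds; i++)
                        {
                            stream.Write(buffer, 0, size);       // ping
                            int got = 0;
                            while (got < size)                   // wait for the full pong
                                got += stream.Read(buffer, got, size - got);
                        }
                        sw.Stop();
                        Console.WriteLine("{0,6} bytes: {1:F1} us round-trip",
                            size, sw.Elapsed.TotalMilliseconds * 1000.0 / rounds);
                    }
                }
            }
        }

    Running the same probe over one shared socket and over a dedicated socket per direction is what would expose the fast-path effect described above.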

    Read the article

  • How are developers using source control? I am trying to find the most efficient way to do source control

    - by RJ
    I work in a group of 4 .NET developers. We rarely work on the same project at the same time, but it does happen from time to time. We use TFS for source control. My most recent example is a project I just placed into production last night that included 2 WCF services and a web application front end. I worked out of a branch called "prod" because the application is brand new and has never seen the light of day. Now that the project is live, I need to branch off the prod branch for features, bugs, etc. So what is the best way to do this? Do I simply create a new branch, and sort of archive the old branch and never use it again? Do I branch off and then merge my branch changes back into the prod branch when I want to deploy to production? And what about the file and assembly versions? They are currently at 1.0.0.0. When do they change, and why? If I fix a small bug, which number changes, if any? If I add a feature, which number changes, if any? What I am looking for is what you have found to be the best way to efficiently manage source control. Most places I have worked always seem to bang heads with the source control system in one way or another, and I would just like to find out what you have found that works best.
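    On the version-number part: .NET assembly versions have four parts, major.minor.build.revision, set via attributes in AssemblyInfo.cs. A common convention (one choice among many, not a TFS rule) is to bump the revision for a bug fix and the minor version for a new feature:

        // AssemblyInfo.cs -- illustrative values, not a prescribed policy.
        using System.Reflection;

        [assembly: AssemblyVersion("1.1.0.0")]      // minor bumped: a feature was added
        [assembly: AssemblyFileVersion("1.1.0.1")]  // revision bumped: a bug fix shipped

    AssemblyVersion is what the CLR binds against, so changing it affects strong-named references; AssemblyFileVersion is informational and safe to bump on every release.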

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an execution plan, essentially an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a Join node in the execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join (some algorithms don't support all join types), whether the data needs to be ordered, and probably many other factors.

    Join algorithms:

        Nested Loops Join: best for small inputs; can be optimized with an ordered inner table.
        Merge Join: best for medium to large sorted inputs, or output that needs to be ordered.
        Hash Join: best for medium to large inputs; can be parallelized to scale linearly.

    LINQ query:

        DataTable firstTable, secondTable;
        ...
        var rows = from firstRow in firstTable.AsEnumerable()
                   join secondRow in secondTable.AsEnumerable()
                   on firstRow.Field<object>(randomObject.Property)
                   equals secondRow.Field<object>(randomObject.Property)
                   select new { firstRow, secondRow };

    SQL query:

        SELECT *
        FROM firstTable fT
        INNER JOIN secondTable sT ON fT.Property = sT.Property

    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose an algorithm for its joins, or does it always use one?
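    For what it's worth, LINQ to Objects has no optimizer and no statistics, so it cannot pick among algorithms the way SQL Server does: Enumerable.Join always does the equivalent of a hash join, buffering the inner sequence into a hash-based Lookup and then streaming the outer sequence against it. A sketch of that behavior with made-up data:

        using System;
        using System.Linq;

        class HashJoinSketch
        {
            static void Main()
            {
                var first = new[] { (Key: 1, Name: "a"), (Key: 2, Name: "b") };
                var second = new[] { (Key: 1, City: "x"), (Key: 1, City: "y") };

                // Roughly what Enumerable.Join does internally:
                // 1. Build a hash lookup over the inner sequence (one pass).
                var lookup = second.ToLookup(s => s.Key);

                // 2. Stream the outer sequence, probing the lookup per element.
                var rows = first.SelectMany(f => lookup[f.Key],
                                            (f, s) => new { f.Name, s.City });

                foreach (var row in rows)
                    Console.WriteLine("{0} -> {1}", row.Name, row.City);
            }
        }

    So the join itself is hash-based regardless of input shape; there is no merge or nested-loops variant to fall back on.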

    Read the article

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates of different polygons. Each row of the file has X,Y coordinates and a string saying which polygon the data point belongs to. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:

        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        ...

    The Index.csv file has the number of data points (number of rows) for each polygon in Polygons.csv. Those details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are adjacent and overlap in some areas, though). I thought that by using "fill" I could see them better, but "fill" just fills every polygon it can find, and that is not desirable; I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200 KB. Thanks.

        [coordinates, routeNames, polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');
        firstPointer = 0;
        lastPointer = index(1);
        for Counter = 2:size(index)
            firstPointer = firstPointer + index(Counter) + 1;
            hold on
            plot(coordinates(firstPointer:lastPointer, 2), ...
                 coordinates(firstPointer:lastPointer, 1), 'r-')
            lastPointer = lastPointer + index(Counter);
        end

    Read the article

  • Localized Date Validator

    - by Blithe
    Is there a way to use the user's culture to localize the RangeValidator for dates? I am looking for a good way to validate a date while avoiding a fixed format (e.g. having to enforce dd/mm/yyyy with a RegularExpressionValidator).
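    One approach (a sketch, not from the post; the control wiring and the accepted date range are assumptions) is a CustomValidator whose server-side handler parses with the user's current culture instead of a fixed pattern:

        using System;
        using System.Globalization;
        using System.Web.UI.WebControls;

        public partial class DatePage : System.Web.UI.Page
        {
            // Wired to a CustomValidator on the page, e.g.:
            // <asp:CustomValidator ID="valDate" runat="server"
            //     ControlToValidate="txtDate" OnServerValidate="ValidateDate" />
            protected void ValidateDate(object source, ServerValidateEventArgs args)
            {
                DateTime parsed;
                // CurrentCulture follows the user's locale when the page or
                // web.config sets Culture="auto".
                args.IsValid = DateTime.TryParse(args.Value,
                                                 CultureInfo.CurrentCulture,
                                                 DateTimeStyles.None,
                                                 out parsed)
                               && parsed >= new DateTime(1900, 1, 1)   // assumed lower bound
                               && parsed <= DateTime.Today;            // assumed upper bound
            }
        }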

    Read the article

  • Entity Framework autoincrement key

    - by Tommy Ong
    I'm facing an issue with a duplicated incremental field in a concurrency scenario. I'm using EF as the ORM tool and inserting an entity with a field that acts as an incremental INT field. This field is called "SequenceNumber": before each insert, the code reads the database with MAX to get the last SequenceNumber, appends 1 to it, and saves the changes. Between getting the last SequenceNumber and saving, the concurrency problem happens. I'm not using the ID for SequenceNumber because it is not a unique constraint and may reset under certain conditions, such as monthly or yearly:

        InvoiceNumber  | SequenceNumber | DateCreated
        INV00001_08_14 | 1              | 25/08/2014
        INV00001_08_14 | 1              | 25/08/2014  <= (concurrency creates two SeqNo 1)
        INV00002_08_14 | 2              | 25/08/2014
        INV00003_08_14 | 3              | 26/08/2014
        INV00004_08_14 | 4              | 27/08/2014
        INV00005_08_14 | 5              | 29/08/2014
        INV00001_09_14 | 1              | 01/09/2014  <= (sequence number reset)

    The invoice number is formatted from the SequenceNumber. After some research I've ended up with these possible solutions, but I want to know the best practice:

    1) Pessimistic concurrency: lock the table against reads until the current transaction completes (I'm not fond of this idea, as I guess the performance impact would be great).
    2) Create a stored procedure solely for this purpose that selects and inserts in a single statement, so the concurrency window is minimal (I would prefer an EF-based approach if possible).
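    A middle ground that stays in EF (a sketch under an assumed schema: a unique index on the period/SequenceNumber pair, plus a hypothetical MyDbContext and Invoice entity) is to let concurrent writers race and have the loser retry against a fresh MAX:

        using System;
        using System.Data.Entity.Infrastructure;  // DbUpdateException (EF 4.1+)
        using System.Linq;

        public static class InvoiceWriter
        {
            // fill() attaches the new invoice to the context using the
            // sequence number it is given.
            public static void InsertWithSequence(string period, Action<MyDbContext, int> fill)
            {
                for (int attempt = 0; attempt < 5; attempt++)    // bounded retries
                {
                    using (var db = new MyDbContext())
                    {
                        int last = db.Invoices
                                     .Where(i => i.Period == period)
                                     .Select(i => (int?)i.SequenceNumber)
                                     .Max() ?? 0;
                        fill(db, last + 1);
                        try
                        {
                            db.SaveChanges();
                            return;                              // success
                        }
                        catch (DbUpdateException)
                        {
                            // Unique-key violation: we lost the race.
                            // Loop and retry with a fresh MAX.
                        }
                    }
                }
                throw new InvalidOperationException("Could not allocate a sequence number.");
            }
        }

    The unique index is what turns a silent duplicate into a catchable failure; without it this loop cannot detect the race at all.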

    Read the article

  • How to send raw XML in Python?

    - by davywahd
    Hi, I am trying to send raw XML to a service in Python. I have the address of the service, and my question is how I would wrap XML in Python and send it to the service. The address is in the format below:

        192.1100.2.2:54239

    And say the XML is:

        <?xml version="1.0" encoding="UTF-8"?><header/><body><code/></body>

    Anyone know what to do?

    Read the article

  • change error_message in Zend_Validate_EmailAddress

    - by Alexandr
    I need to change all the standard error messages to my own message on a Zend_Form_Element_Text when I use the 'EmailAddress' validator. This validator throws several different messages:

        Value is required and can't be empty
        '' is no valid email address in the basic format local-part@hostname

    When I set the option setErrorMessage('some my error text'), that string is shown for every error, several times, so it looks like:

        some my error text
        some my error text

    What is the best way to solve this problem? ZF version 1.10.3.

    Read the article

  • Determine stale data

    - by Andrei
    Say I have a file in this format:

        12:04:21 .3
        12:10:21 1.3
        12:13:21 1.4
        12:14:21 1.3
        ...and so on

    I want to find repeated numbers in the second column for, say, 10 consecutive timestamps, thereby detecting staleness, and I want to output the beginning and the end of the stale timestamp range. Can someone help me come up with it? You can use awk or bash. Thanks.

    Read the article

  • Any addins for VS2010 to support VS2005 projects?

    - by Eye of Hell
    Hello. Some of the old projects in our company are left to be built with VS2005 in the autobuild system (making them build correctly in 2010 would cost time). Are there any add-ins for VS2010 that would allow opening a VS2005 project and editing its files without converting the project file itself to the VS2010 format (converting would kill the autobuild)? Of course I could create a separate project named "xxx_vs2010.vcproj" for each of these products, but that would be a mess :(.

    Read the article

  • How to "check for overwide node(s)" in a Graphviz dot file

    - by Tomas Forsman
    I'm trying to generate a large graph using Graphviz. I have a generated text file with nodes defined in the dot format. When I try to generate a PNG file from it using:

        dot -Tpng:cairo graph.txt -o graph.png

    I get the error message:

        Error: Edge length 136228 larger than maximum 65535 allowed. Check for overwide node(s).

    How do I actually "check for overwide node(s)"?

    Read the article

  • Database Schema for Machine Tags?

    - by Gabriel
    Machine tags are more precise tags (http://www.flickr.com/groups/api/discuss/72157594497877875). They allow a user to tag basically anything as an object, in the format object:property=value. Any tips on an RDBMS schema that implements this? Just wondering if anyone has already dabbled with it. I imagine the schema is quite similar to implementing RDF triples in an RDBMS.

    Read the article

  • Not getting content in Excel sheet after exporting using DynamicJasper in Grails

    - by Ravi
    Hi. I am new to Grails and Jasper reports. Please help me with this issue. I am trying an example of how to save in Excel format the data that we display in Grails. I am able to save as Excel, but when I open the sheet there is nothing inside it, not even columns. Refer to http://www.wysmedia.com/2009/05/dance-with-dynamic-jasper-report/ for the example I am trying. Many thanks.

    Read the article

  • In MS Access form, how to color background of selected record?

    - by PowerUser
    I have a somewhat complicated-looking Access form with a continuous display (meaning multiple records are shown at once). I'd like to change the background color of the selected record only, so the end user can easily tell which record they are on. I'm thinking of perhaps a conditional format, or maybe something like this:

        Private Sub Detail_HasFocus()
            Detail.BackColor(Me.Color) = vbBlue
        End Sub

    ...and something similar for when that row loses focus. This code snippet obviously won't work, but it's the kind of code I'd like to achieve.

    Read the article

  • How do I check if a variable has been initialized

    - by Javier Parra
    First of all, I'm fairly new to Java, so sorry if this question is utterly simple. The thing is: I have a String[] s, made by splitting a String, in which every item is a number. I want to convert the items of s into an int[] n. s[0] contains the number of items that n will hold, effectively s.length - 1. I'm trying to do this using a for-each loop:

        int[] n;
        for (String num : s) {
            if (/* n is not initialized */) {
                n = new int[Integer.parseInt(num)];
                continue;
            }
            n[/* next free index */] = Integer.parseInt(num);
        }

    Now, I realize that I could use something like this:

        int[] n = new int[Integer.parseInt(s[0])];
        for (int i = 1; i < s.length; i++) {
            n[i - 1] = Integer.parseInt(s[i]);
        }

    But I'm sure that I will be faced with that "if n is not initialized, initialize it" problem again in the future.

    Read the article

  • How to properly implement the Strategy pattern in a web MVC framework?

    - by jboxer
    In my Django app, I have a model (let's call it Foo) with a field called "type". I'd like to use Foo.type to indicate what type the specific instance of Foo is (possible choices are "Number", "Date", "Single Line of Text", "Multiple Lines of Text", and a few others). There are two things I'd like the "type" field to end up affecting: the way a value is converted from its normal type to text (for example, for "Date" it may be str(the_date.isoformat())), and the way a value is converted from text to the specified type (for "Date" it may be datetime.date.fromtimestamp(the_text)). To me, this seems like the Strategy pattern (I may be completely wrong, and feel free to correct me if I am).

    My question is: what's the proper way to code this in a web MVC framework? In a client-side app, I'd create a Type class with abstract methods serialize() and unserialize(), override those methods in subclasses of Type (such as NumberType and DateType), and dynamically set the "type" field of a newly-instantiated Foo to the appropriate Type subclass at runtime. In a web framework, it's not quite as straightforward for me.

    Right now, the way that makes the most sense is to define Foo.type as a small-integer field with a limited set of choices (0 = "Number", 1 = "Date", 2 = "Single Line of Text", etc.) defined in the code. Then, when a Foo object is instantiated, use a factory method that looks at the value of the instance's "type" field and plugs in the correct Type subclass (as described in the paragraph above). Foo would also have serialize() and unserialize() methods, which would delegate directly to the plugged-in Type subclass. How does this design sound? I've never run into this issue before, so I'd really like to know if other people have, and how they've solved it.
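    A minimal sketch of the factory-plus-delegation design described above. Every name in it is invented, and it is written in C# rather than Python purely to show the shape of the pattern:

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        // Strategy: one (de)serializer per field type.
        interface IFieldType
        {
            string Serialize(object value);
            object Unserialize(string text);
        }

        class NumberType : IFieldType
        {
            public string Serialize(object value) =>
                Convert.ToString(value, CultureInfo.InvariantCulture);
            public object Unserialize(string text) =>
                double.Parse(text, CultureInfo.InvariantCulture);
        }

        class DateType : IFieldType
        {
            public string Serialize(object value) =>
                ((DateTime)value).ToString("yyyy-MM-dd");
            public object Unserialize(string text) =>
                DateTime.ParseExact(text, "yyyy-MM-dd", CultureInfo.InvariantCulture);
        }

        class Foo
        {
            // The small-integer "type" column, mapped to a strategy (the factory).
            static readonly Dictionary<int, IFieldType> Registry =
                new Dictionary<int, IFieldType>
                {
                    [0] = new NumberType(),
                    [1] = new DateType(),
                };

            public int TypeCode { get; set; }

            // Foo delegates directly to the plugged-in strategy.
            public string Serialize(object value) => Registry[TypeCode].Serialize(value);
            public object Unserialize(string text) => Registry[TypeCode].Unserialize(text);
        }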

    Read the article

  • Not using the "using" statement for TransactionScope

    - by hotyi
    I always use the following pattern for TransactionScope:

        using (TransactionScope scope = new TransactionScope())
        {
            ....
        }

    Sometimes I want to wrap the TransactionScope in a new class, for example a DbContext class, so I can write statements like:

        dbContext.Begin();
        ...
        dbContext.Submit();

    It seems the TransactionScope class needs the "using" statement to get disposed. I want to know if there is any way to avoid "using".
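    There is: "using" is only syntactic sugar for try/finally plus Dispose(), so a wrapper can hold the scope in a field and dispose it explicitly. A sketch (the Begin/Submit names come from the question; the rest is assumed):

        using System;
        using System.Transactions;

        public class DbContextWrapper : IDisposable
        {
            private TransactionScope _scope;

            public void Begin()
            {
                _scope = new TransactionScope();
            }

            public void Submit()
            {
                _scope.Complete();   // vote to commit
                _scope.Dispose();    // the commit actually happens here, on Dispose
                _scope = null;
            }

            // Safety net: if Submit was never called, disposing rolls back.
            public void Dispose()
            {
                if (_scope != null)
                {
                    _scope.Dispose();
                    _scope = null;
                }
            }
        }

    The caller must still guarantee Dispose runs on the error path, which is exactly what "using" automates; an exception between Begin and Submit would otherwise leave the transaction pending until finalization.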

    Read the article

  • HttpWebRequest possibly slowing website

    - by Steven Smith
    Using Visual Studio 2012, C#/.NET 4.5, SQL Server 2008, Feefo and NopCommerce. I recently implemented a new review service on one of our current sites. When the change went live, everything worked fine on the first day. Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong. In OrderProcessingService.cs in NopCommerce's services, I make an HttpWebRequest when an order has been confirmed as completed. Here is the code:

        var email = HttpUtility.UrlEncode(order.Customer.Email.ToString());
        var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString());
        var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null
            ? productVariant.ProductVariant.Product.MetaDescription.ToString()
            : "product");
        var orderRef = HttpUtility.UrlEncode(order.Id.ToString());
        var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}",
            productVariant.ProductVariant.ProductId,
            productVariant.ProductVariant.Name.Replace(" ", "-")));

        string itemRef = "";
        try
        {
            itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString());
        }
        catch
        {
            itemRef = "0";
        }

        var url = string.Format("feefo Url", login, password, email, name, description, orderRef, productLink, itemRef);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = false;
        request.Timeout = 5000;
        request.Proxy = null;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription == "OK")
            {
                var stream = response.GetResponseStream();
                if (stream != null)
                {
                    using (var reader = new StreamReader(stream))
                    {
                        var content = reader.ReadToEnd();
                    }
                }
            }
        }

    So as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo. This hasn't been working all week, since the 15th (the day of the implementation), and the site has been grinding to a halt recently. The stream and reader around var content are there for debugging. Does the code red-flag anything to you that could affect the performance of the website? Also note that I have run some SQL statements to check for deadlocks or large lock escalations; so far everything seems fine, and the logs show just the usual bot traffic. Any help would be much appreciated! EDIT: also note that this code is in a method that is called and wrapped in a try/catch. UPDATE: forget about the "not sending" part -- I was just told my code was rolled back last week.
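    One structural observation: the request runs synchronously inside order processing, once per product variant, so a slow Feefo endpoint can hold checkout threads for up to the 5-second timeout each time. A sketch of moving it off the request path (the class name and logging are assumptions; a durable queue would be more robust still):

        using System;
        using System.Net;
        using System.Threading.Tasks;

        // Fire the review notification off the order-processing path so a slow
        // remote endpoint cannot stall checkout. The trade-off is that a
        // notification can be lost on an app-pool recycle.
        public static class FeefoNotifier
        {
            public static void Send(string url)   // url built exactly as before
            {
                Task.Run(() =>
                {
                    try
                    {
                        var request = (HttpWebRequest)WebRequest.Create(url);
                        request.KeepAlive = false;
                        request.Timeout = 5000;
                        request.Proxy = null;
                        using (var response = (HttpWebResponse)request.GetResponse())
                        {
                            // The body is ignored; log response.StatusCode if needed.
                        }
                    }
                    catch (Exception ex)
                    {
                        // Log rather than swallow silently, e.g. via the NopCommerce logger.
                        Console.Error.WriteLine("Feefo call failed: " + ex.Message);
                    }
                });
            }
        }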

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables, A and B. Table A has a multi-level index (a, b) and one column (ts); b determines ts univocally.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find, from B, the timestamp of the first occurrence of each event type after the timestamp specified in A for each value of b. In other words, I would like a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))), axis=1
        ).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that I now have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one discussed above. How would I improve the performance?

    Read the article
