Search Results

  • Efficiency of manually written loops vs operator overloads (C++)

    - by Sagekilla
    Hi all, in the program I'm working on I have 3-element arrays, which I use as mathematical vectors for all intents and purposes. While writing my code, I was tempted to roll my own Vector class with simple +, -, *, / overloads so I can simplify statements like:

        for (int i = 0; i < 3; i++)
            r[i] = r1[i] - r2[i];

        // becomes:
        r = r1 - r2;

    which should be more or less identical in generated code. But when it comes to more complicated things, could this really impact my performance heavily? One example from my code, manually written:

        for (int j = 0; j < 3; j++) {
            p.vel[j] = p.oldVel[j] + (p.oldAcc[j] + p.acc[j]) * dt2 + (p.oldJerk[j] - p.jerk[j]) * dt12;
            p.pos[j] = p.oldPos[j] + (p.oldVel[j] + p.vel[j]) * dt2 + (p.oldAcc[j] - p.acc[j]) * dt12;
        }

    and using a Vector class with operator overloads:

        p.vel = p.oldVel + (p.oldAcc + p.acc) * dt2 + (p.oldJerk - p.jerk) * dt12;
        p.pos = p.oldPos + (p.oldVel + p.vel) * dt2 + (p.oldAcc - p.acc) * dt12;

    I am compiling my code for maximum possible speed, as it's extremely important that this code runs quickly and calculates accurately. So will relying on my Vector class for these sorts of things really affect me? For those curious, this is part of some numerical integration code which is non-trivial to run in my program. Any insight would be appreciated, as would any idioms or tricks I'm unaware of.
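
    A minimal sketch of the kind of class in question, a hypothetical Vec3 with inline, loop-free operators, which an optimizing compiler can typically reduce to the same instructions as the hand-written loops:

        // Hypothetical fixed-size vector; everything is inline, so the
        // compiler sees straight-line arithmetic with no function calls.
        struct Vec3 {
            double v[3];
            double& operator[](int i)       { return v[i]; }
            double  operator[](int i) const { return v[i]; }
        };

        inline Vec3 operator+(const Vec3& a, const Vec3& b) {
            Vec3 r = {{ a[0] + b[0], a[1] + b[1], a[2] + b[2] }};
            return r;
        }
        inline Vec3 operator-(const Vec3& a, const Vec3& b) {
            Vec3 r = {{ a[0] - b[0], a[1] - b[1], a[2] - b[2] }};
            return r;
        }
        inline Vec3 operator*(const Vec3& a, double s) {
            Vec3 r = {{ a[0] * s, a[1] * s, a[2] * s }};
            return r;
        }

    The usual caveat is that each operator returns a temporary, so a long chained expression creates several of them; compilers are generally good at eliding temporaries for small structs like this, and expression templates are the classic (heavier) technique when profiling shows they are not.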

  • Is this a bug? Or is it a setting in ASP.NET 4 (or MVC 2)?

    - by John Gietzen
    I just recently started trying out T4MVC and I like the idea of eliminating magic strings. However, when I use it on my master page for my stylesheets, this markup:

        <link href="<%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    renders like this:

        <link href="&lt;%: Links.Content.site_css %>" rel="stylesheet" type="text/css" />

    whereas these render correctly:

        <link href="<%: Url.Content("~/Content/Site.css") %>" rel="stylesheet" type="text/css" />
        <link href="<%: Links.Content.site_css + "" %>" rel="stylesheet" type="text/css" />

    It appears that as long as I have double quotes inside the code segment, it works; but when I put anything else in there, it escapes the leading less-than. Is this something I can turn off? Is this a bug?

    Edit: this does not happen for <script src="..." />, nor for <a href="...">.

    Edit 2: minimal case:

        <link href="<%: string.Empty %>" />    <!-- escaped -->
        <link href="<%: "" %>" />              <!-- renders correctly -->

  • How can I limit asp.net control actions based on user role?

    - by Duke
    I have several pages or views in my application which are essentially the same for both authenticated and anonymous users. I'd like to limit the insert/update/delete actions in FormViews and GridViews to authenticated users only, and allow read access to both. I'm using the ASP.NET configuration system for handling authentication and roles. That system limits access based on path, so I've been creating duplicate pages for authenticated and anonymous paths. The solution that comes to mind immediately is to check roles in the appropriate event handlers, limiting both which actions are displayed (insert/update/delete buttons) and which actions are performed (for users who may know how to trigger an action in the absence of a button). However, this solution doesn't eliminate duplication: I'd be duplicating security code across a series of pages rather than duplicating pages and limiting access based on path, and the latter is significantly less complicated. I could always build controls that offer role-based configuration, but I don't have time for that kind of commitment right now. Is there a relatively easy way to do this (do such controls exist?), or should I just stick to path-based access and duplicate pages? Does it even make sense to use two methods of authorization? Some pages are strictly for one role or the other, so I'll be making use of path-based authorization anyway. Finally, would using something other than path-based authorization be contrary to typical ASP.NET design practices, at least in the context of using the ASP.NET configuration system?
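
    A sketch of the event-handler approach being weighed here, assuming a hypothetical page with a GridView (control and role names are illustrative):

        // Page_Load: hide the mutating UI from anonymous users.
        protected void Page_Load(object sender, EventArgs e)
        {
            bool canEdit = User.Identity.IsAuthenticated;   // or User.IsInRole("Editor")
            ProductsGrid.AutoGenerateEditButton = canEdit;
            ProductsGrid.AutoGenerateDeleteButton = canEdit;
        }

        // Defense in depth: also block the action itself, since hiding a
        // button does not stop a hand-crafted postback.
        protected void ProductsGrid_RowDeleting(object sender, GridViewDeleteEventArgs e)
        {
            if (!User.Identity.IsAuthenticated)
                e.Cancel = true;
        }

    Centralizing the check in a base page class or a small helper keeps it from being copy-pasted across every page, which addresses most of the duplication concern without building full role-aware controls.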

  • Java 8 Stream, getting head and tail

    - by lyomi
    Java 8 introduced a Stream class that resembles Scala's Stream, a powerful lazy construct with which it is possible to do something like this very concisely:

        def from(n: Int): Stream[Int] = n #:: from(n+1)

        def sieve(s: Stream[Int]): Stream[Int] = {
          s.head #:: sieve(s.tail filter (_ % s.head != 0))
        }

        val primes = sieve(from(2))
        primes takeWhile(_ < 1000) print // prints all primes less than 1000

    I wondered if it is possible to do this in Java 8, so I wrote something like this:

        IntStream from(int n) {
            return IntStream.iterate(n, m -> m + 1);
        }

        IntStream sieve(IntStream s) {
            int head = s.findFirst().getAsInt();
            return IntStream.concat(IntStream.of(head),
                    sieve(s.skip(1).filter(n -> n % head != 0)));
        }

        IntStream primes = sieve(from(2));
        PrimitiveIterator.OfInt it = primes.iterator();
        for (int prime = it.nextInt(); prime < 1000; prime = it.nextInt()) {
            System.out.println(prime);
        }

    Fairly simple, but it produces

        java.lang.IllegalStateException: stream has already been operated upon or closed

    because findFirst() is a terminal operation and skip() starts a second pipeline on the same stream, and a Stream may only be operated on once. I don't really need to use up the stream twice, since all I need is the first number in the stream and the rest as another stream, i.e. the equivalent of Scala's Stream.head and Stream.tail. Is there a method in Java 8 Stream with which I can achieve this? Thanks.
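
    One way to split off head and tail without operating on the stream twice is to drop down to the stream's iterator, which becomes the single terminal operation. A sketch, assuming a sequential, non-empty IntStream:

        import java.util.PrimitiveIterator;
        import java.util.stream.IntStream;

        class HeadTailDemo {
            public static void main(String[] args) {
                IntStream s = IntStream.iterate(2, m -> m + 1);

                // The iterator is the only terminal operation on s; both the
                // head and the tail are drawn from it afterwards.
                PrimitiveIterator.OfInt it = s.iterator();
                int head = it.nextInt();                          // s.head
                IntStream tail = IntStream.generate(it::nextInt); // s.tail, still lazy

                System.out.println(head);                   // 2
                tail.limit(5).forEach(System.out::println); // 3 4 5 6 7
            }
        }

    This does not make the fully recursive sieve work as written, though: IntStream.concat evaluates both of its arguments up front, so the recursive call would still be expanded eagerly rather than on demand.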

  • Fastest way to become a MySQL expert?

    - by Kerry
    I have been using MySQL for years, mainly on smaller projects, until the last year or so. I'm not sure if it's the nature of the language or my lack of real tutorials, but I'm never certain that what I'm writing is proper for optimization or scaling purposes. While self-taught in PHP, I'm very sure of myself and the code I write, and can easily compare it to others'. With MySQL, I'm not sure whether (and in what cases) an INNER JOIN or a LEFT JOIN should be used, nor am I aware of the large amount of functionality the database has. While I've written code for databases that handled tens of millions of records, I don't know if it's optimal. I often find that a small tweak will make a query take less than 1/10 of the original time... but how do I know that my current query isn't also slow? I would like to become completely confident in my ability to optimize databases and make them scalable. Usage is not a problem: I use MySQL on a daily basis in a number of different ways. So, the question is, what's the path? Reading a book? Websites/tutorials? Recommendations?
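
    On the specific worry of not knowing whether the current query is slow, EXPLAIN is the standard first tool; a sketch against a hypothetical orders table:

        -- Ask MySQL how it will execute the query: which indexes it
        -- considers, which it picks, and roughly how many rows it will scan.
        EXPLAIN
        SELECT o.id, o.total
        FROM   orders o
        WHERE  o.customer_id = 42
          AND  o.created_at >= '2010-01-01';

        -- A "type" of ALL (full table scan) or a large "rows" estimate is
        -- the usual hint that an index on (customer_id, created_at) is missing.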

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt the data and ensure that what I am doing is as secure as possible. Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet on its own, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things:

    Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will result in making it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once, and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which would be a real problem for my application.

    So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? ... Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
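
    For shape rather than a vetted protocol, one modern sketch is an authenticated mode (AES-GCM) plus a slow key-derivation function, so each datagram stands alone, is tamper-evident, and the passphrase is expensive to grind offline. Assuming standard JCE algorithm names and a provider with GCM support:

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;

        class PacketCrypto {
            // Derive the key once per session. The iteration count, not the
            // cipher mode, is what slows offline guessing of the passphrase.
            static SecretKeySpec deriveKey(char[] pass, byte[] salt) throws Exception {
                SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
                byte[] key = f.generateSecret(new PBEKeySpec(pass, salt, 100000, 128)).getEncoded();
                return new SecretKeySpec(key, "AES");
            }

            // Seal one datagram: fresh 12-byte nonce, GCM tag authenticates
            // the payload, and the nonce travels in the clear alongside it.
            static byte[] seal(SecretKeySpec key, byte[] plain) throws Exception {
                byte[] nonce = new byte[12];
                new SecureRandom().nextBytes(nonce);
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
                byte[] ct = c.doFinal(plain);
                byte[] packet = new byte[nonce.length + ct.length];
                System.arraycopy(nonce, 0, packet, 0, nonce.length);
                System.arraycopy(ct, 0, packet, nonce.length, ct.length);
                return packet;
            }
        }

    Two notes on the stated worries: known plaintext does not meaningfully weaken AES itself, so the realistic risk really is the passphrase-sized key space, which the KDF work factor addresses; and datagram TLS (DTLS) exists precisely for the "TLS over UDP" case, so it is worth considering before a hand-rolled scheme.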

  • Deserialize xml which uses attribute name/value pairs

    - by Bodyloss
    My application receives a constant stream of XML files which are more or less a direct copy of the database record:

        <record type="update">
            <field name="id">987654321</field>
            <field name="user_id">4321</field>
            <field name="updated">2011-11-24 13:43:23</field>
        </record>

    And I need to deserialize this into a class which provides nullable properties for all columns:

        class Record
        {
            public long? Id { get; set; }
            public long? UserId { get; set; }
            public DateTime? Updated { get; set; }
        }

    I just can't work out a method of doing this without parsing the XML file manually and switching on the field's name attribute to store the values. Is there a way this can be achieved using an XmlSerializer? And if not, is there a more efficient way of parsing it manually? My main problem is that the name attribute's value needs to map to a property name, with the property's value coming from the contents of the <field>...</field> element. Regards and thanks.
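
    XmlSerializer maps elements and attributes by name, not by attribute value, so it has no built-in switch for this name/value shape. A lightweight alternative is LINQ to XML; a sketch, with the three field names taken from the sample above:

        using System;
        using System.Linq;
        using System.Xml.Linq;

        static Record Parse(string xml)
        {
            // Index the <field> elements by their name attribute, then pull
            // out the known columns; anything missing simply stays null.
            var fields = XElement.Parse(xml)
                .Elements("field")
                .ToDictionary(f => (string)f.Attribute("name"), f => f.Value);

            return new Record
            {
                Id = fields.ContainsKey("id") ? (long?)long.Parse(fields["id"]) : null,
                UserId = fields.ContainsKey("user_id") ? (long?)long.Parse(fields["user_id"]) : null,
                Updated = fields.ContainsKey("updated") ? (DateTime?)DateTime.Parse(fields["updated"]) : null
            };
        }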

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which can be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides what to do based on the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implementing this with the Executor framework, and I want to know which one is better and why:

    1. Create a few tasks, say 5 to 10, each doing a bunch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send and receive, and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). While looking for an answer to this specific case, my question also has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
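
    A sketch of the fine-grained option, assuming a hypothetical Request type and probe() method for the send/receive: with tasks this short and I/O-bound, hundreds of small tasks on a modest fixed pool is usually the simpler and better-balanced choice.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        static List<Future<Boolean>> runAll(List<Request> requests) throws InterruptedException {
            // The pool size, not the task count, bounds concurrency; one
            // slow server stalls only its own tasks, not a tenth of them.
            ExecutorService pool = Executors.newFixedThreadPool(20);
            List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
            for (final Request r : requests) {
                tasks.add(new Callable<Boolean>() {
                    public Boolean call() throws Exception {
                        return probe(r); // hypothetical: send, then judge the reply
                    }
                });
            }
            try {
                return pool.invokeAll(tasks); // blocks until every task completes
            } finally {
                pool.shutdown();
            }
        }

    Coarse 5-to-10-task grouping mainly pays off when per-task overhead dominates; here each task spends its time waiting on the network, so scheduling overhead is noise.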

  • question about counting sort

    - by davit-datuashvili
    Hi, I have written the following code, which prints the elements in sorted order. The one big problem is that it uses two additional arrays. Here is my code:

        public class occurance {
            public static final int n = 5;

            public static void main(String[] args) {
                // n is the maximum possible value in the array; suppose n = 5,
                // then the array may be:
                int a[] = new int[]{3, 4, 4, 2, 1, 3, 5}; // all elements <= n

                // create arrays of size a.length * n
                int b[] = new int[a.length * n];
                int c[] = new int[b.length];
                for (int i = 0; i < b.length; i++) {
                    b[i] = 0;
                    c[i] = 0;
                }
                for (int i = 0; i < a.length; i++) {
                    if (b[a[i]] == 1) {
                        c[a[i]] = 1;
                    } else {
                        b[a[i]] = 1;
                    }
                }
                for (int i = 0; i < b.length; i++) {
                    if (b[i] == 1) {
                        System.out.println(i);
                    }
                    if (c[i] == 1) {
                        System.out.println(i);
                    }
                }
            }
        }
        // output: 1 2 3 3 4 4 5

    Two questions: 1. What is the complexity (running time) of this algorithm? 2. How do I put the elements into another array in sorted order? Thanks.
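
    For comparison, a standard counting sort sketch: it counts occurrences instead of capping them (the b/c pair above can record each value at most twice, so a third duplicate would be dropped), needs one auxiliary array of size n + 1, and fills a sorted output array in O(a.length + n) time, which answers both questions:

        static int[] countingSort(int[] a, int n) { // values assumed in 1..n
            int[] count = new int[n + 1];
            for (int x : a)
                count[x]++;                 // tally every occurrence

            int[] sorted = new int[a.length];
            int k = 0;
            for (int v = 1; v <= n; v++)    // walk the values in order
                for (int c = 0; c < count[v]; c++)
                    sorted[k++] = v;        // emit v once per occurrence
            return sorted;
        }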

  • Releasing Autoreleasepool crashes on iOS 4.0 (and only on 4.0)

    - by samsam
    Hi there. I'm wondering what could cause this. I have several methods in my code that I call using performSelectorInBackground. Within each of these methods I have an autorelease pool that is alloc/init-ed at the beginning and released at the end of the method. This works perfectly on iOS 3.1.3 / 3.2 / 4.2 / 4.2.1, but it fatally crashes on iOS 4.0 with an EXC_BAD_ACCESS exception that happens after calling [myPool release]. After I noticed this strange behaviour, I rewrote portions of my code to make my app "less parallel" when the client OS is 4.0. After I did that, the next point where the app crashed was within the ReachabilityCallback method from Apple's Reachability "framework". Well, now I'm not quite sure what to do. What I do within my threaded methods is pretty simple XML parsing (no Cocoa calls or stuff that would affect the UI). After each method finishes, it posts a notification which the coordinating thread listens to, and once all the parallelized methods have finished, the coordinating thread calls view controllers etc. I have absolutely no clue what could cause this weird behaviour, especially because Apple's code fails as well. Any help is greatly appreciated! Thanks, sam

  • Wasteful Ajax Page Loading

    - by Matt Dawdy
    I've started a new job, and the portion of the project I'm working on has a very odd structure. Every page is a .NET .aspx page, and it loads just fine, but nothing is really done at load time. Everything is loaded from a jQuery document.ready handler. What is even more... interesting... is that the handler makes Ajax calls that drop entire .aspx pages into divs on the page, but first it strips out several parts of the returned page. This is the "magic" script the previous programmer ran on all the HTML returned from his Ajax calls:

        function CleanupResponseText(responseText, uniqueName) {
            responseText = responseText.replace("theForm.submit();",
                "SubmitSubForm(theForm, $(theForm).parent());");
            responseText = responseText.replace(new RegExp("theForm", "g"), uniqueName);
            responseText = responseText.replace(new RegExp("doPostBack", "g"),
                "doPostBack" + uniqueName);
            return responseText;
        }

    He then intercepts any kind of form postback and runs his own form submission function:

        function SubmitSubForm(form, container) {
            //ShowLoading(container);
            $(form).ajaxSubmit({
                url: $(form).attr("action"),
                success: function (responseText) {
                    $(container).html(CleanupResponseText(responseText, form.id));
                    $("form", container).css("margin-top", "0").css("padding-top", "0");
                    //HideLoading(container);
                }
            });
        }

    Am I way off base in thinking that this is less than optimal? I mean, how does a browser handle the html, head, and other tags that have nothing to do with what you are really trying to drop into that div? Also, he's returning things like asp:GridView controls and the associated ViewState, which can be quite large if the dataset is big. Has anyone seen this before?

  • What is the fastest way to find duplicates in multiple BIG txt files?

    - by user2950750
    I am really in deep water here and I need a lifeline. I have 10 txt files. Each file has up to 100,000,000 lines of data. Each line is simply a number representing something else. Numbers go up to 9 digits. I need to (somehow) scan these 10 files and find the numbers that appear in all 10 files. And here comes the tricky part: I have to do it in less than 2 seconds. I am not a developer, so I need an explanation for dummies. I have done enough research to learn that hash tables and map/reduce might be something I can make use of. But can they really be used to make it this fast, or do I need more advanced solutions? I have also been thinking about cutting the files up into smaller files, so that 1 file with 100,000,000 lines becomes 100 files with 1,000,000 lines each. But I do not know what is best: 10 files with 100 million lines, or 1000 files with 1 million lines? When I try to open the 100-million-line file, it takes forever, so I think maybe it is just too big to be used; but I don't know if you can write code that will scan it without opening it in an editor. Speed is the most important factor here, and I need to know whether it can be done as fast as I need, or whether I have to store my data in another way, for example in a database like MySQL. Thank you in advance to anybody who can give some good feedback.
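
    To make the idea concrete, a minimal sketch in Python: read each file once and intersect sets of numbers. Whether it fits the 2-second budget depends almost entirely on disk speed; since the numbers are under a billion, a bitset per file (about 120 MB each) is the usual next step if memory and speed demand it.

        # Intersect the numbers that appear in every file; each file is read
        # exactly once, one number per line as described.
        def common_numbers(paths):
            common = None
            for path in paths:
                with open(path) as f:
                    numbers = set(int(line) for line in f)
                common = numbers if common is None else common & numbers
                if not common:   # early exit: nothing can be in all files
                    break
            return common

        result = common_numbers(["file1.txt", "file2.txt"])  # ... up to file10.txt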

  • What Test Environment Setup do Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from those of you who are passionate about and versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    1. Type one command to run the test suite for ANY project I find on GitHub.
    2. Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    3. Build gems from the ground up with autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:

    What is your test environment setup, as an SO reader and GitHub project committer, using autotest, so that whenever you git clone a gem you can run its tests and develop it with autotest if desired? What are the people writing the Paperclip tests and the Authlogic tests doing? What is their setup?

    Thanks for the insight. Looking for answers that will make me a more effective tester.

  • Host WCF in MVC2 Site

    - by Basiclife
    Hi, we've got a very large, complex MVC 2 website. We want to add an API for some internal tools and decided to use WCF. Ideally, we want the MVC site itself to host the WCF service. Reasons include:

    - Although there are multiple tiers to the application, some functionality we'd like in the API requires the website itself (e.g. formatting emails).
    - We use TFS to auto-build (continuous integration) and deploy; the less we need to modify the build and release mechanism, the better.
    - We use the Unity container and inversion of control throughout the application. Being part of the website would allow us to reuse configuration classes and other helper methods.

    I've written a custom ServiceBehavior which in turn has a custom InstanceProvider; this allows me to instantiate and configure a container which is then used to service all requests for class instances from WCF. So my question is: is it possible to host a WCF service from within MVC itself? I've only had experience with services and standard ASP.NET websites before, and didn't realise MVC 2 might be different until I actually tried to wire it into the config and nothing happened. After some googling, there don't seem to be many references to doing this, so I thought I'd ask here.
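
    It is possible; a sketch of the usual .NET 4 shape, registering a ServiceRoute ahead of the MVC routes in Global.asax (the service name here is illustrative):

        using System.ServiceModel.Activation;
        using System.Web.Mvc;
        using System.Web.Routing;

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // WCF answers everything under /api; registering it before the
            // MVC routes keeps {controller}/{action} from swallowing it.
            routes.Add(new ServiceRoute("api",
                new ServiceHostFactory(),        // or a custom factory that
                typeof(InternalToolsService)));  // builds a Unity-aware host

            routes.MapRoute("Default", "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }

    This relies on ASP.NET compatibility mode (aspNetCompatibilityEnabled="true" under serviceHostingEnvironment in web.config), which is also what gives the service access to the website's context; a custom ServiceHostFactory is the natural hook for the existing InstanceProvider wiring.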

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got two ASP pages. I make a request to the first page from Firefox (which takes 30 seconds to process on the server side), and while those 30 seconds are running I make another request from Firefox to the second page (which takes less than 1 second on the server side), but its response arrives after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, the second response is instant. So basically ASP on IIS 6 is somehow limiting each client to one (long-running) request at a time. I need to get around this problem in my .NET client application. This is tested on 3 different systems; if you want to test it, try the ASP scripts at the end. The behaviour is the same for a long SQL execution or just a time-consuming ASP operation. Notes:

    - It's not about HTTP keep-alive.
    - It's not about the persistent connection limit (we tried to increase this in Firefox, and in .NET with Net.ServicePointManager.DefaultConnectionLimit).
    - It's not about the User-Agent.
    - This doesn't happen in ASP.NET, so I assume it's something to do with ASP.dll.

    I'm trying to solve this on the client, not the server. I don't have direct control over the server; it's a 3rd-party solution. Is there any way to get around this? Sample ASP code:

    First ASP page:

        <%
        Set cnn = Server.CreateObject("Adodb.Connection")
        cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;"
        cnn.Execute("WAITFOR DELAY '0:0:30'")
        cnn.Close
        %>

    Second ASP page:

        <%
        Response.Write "bla bla"
        %>
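
    One avenue worth checking, offered as an assumption rather than a diagnosis: classic ASP serializes requests that share a session, which matches two browsers (two different session cookies) running in parallel while a single browser queues. If that is the cause, the client-side workaround is to make sure the second request does not replay the ASPSESSIONID* cookie; a sketch in C#:

        // requires: using System; using System.Net;
        // No CookieContainer is attached to either request, so the
        // Set-Cookie from the first response is never replayed and IIS
        // cannot tie both requests to the same ASP session.
        var slow = (HttpWebRequest)WebRequest.Create("http://server/slow.asp");
        slow.BeginGetResponse(ar =>
            ((HttpWebRequest)ar.AsyncState).EndGetResponse(ar).Close(), slow);

        var fast = (HttpWebRequest)WebRequest.Create("http://server/fast.asp");
        using (var resp = fast.GetResponse())
        {
            // should return without waiting for the 30-second page
        }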

  • Apache console accesses network drives, service does not?

    - by danspants
    I have an Apache 2.2 server running Django. We have a network drive, T:, which we need constant access to within our Django app. When running Apache as a service, we cannot access this drive; as far as any Django code is concerned, the drive does not exist. If I add

        <Directory "t:/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    to the httpd.conf file, the service no longer runs, but I can start Apache as a console application and it works fine: Django can find the network drive and all is well. Why is there a difference between the console and the service? Should there be a difference? I have the service using my own logon, so in theory it should have the same access as I do. I'm keen to keep it running as a service, as it's far less obtrusive when I'm working on the server (unless there's a way to hide the console?). Any help would be most appreciated.
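
    A likely explanation, offered as an assumption: on Windows, drive letters are mapped per logon session, and a service's session does not inherit the interactive user's mappings even under the same account. The usual fix is to reference the share by UNC path instead of the letter; a sketch, with the share name invented for illustration:

        # Services don't see per-user drive mappings such as T:, but they
        # can reach the share directly by UNC path.
        <Directory "//fileserver/projects">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    The Django code would likewise need to open files via the UNC path (e.g. \\fileserver\projects\...) rather than T:\.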

  • Can I architect a web app so it can be deployed to either the cloud or a dedicated server / VPS?

    - by CAD bloke
    Is there an architecture versatile enough that it can be deployed to either a cloud server or a dedicated (or VPS) server with minimal change? Obviously there would be config changes, but I'd rather leave the rest of the app consistent, keeping one maintainable codebase. The app would be ASP.NET and/or ASP.NET MVC. My dev environment is VS 2010. The cloud may, or may not, be Azure. Dedicated or VPS would be Windows Server 2008. Probably. It is not a public-facing web site. The web app I have in mind would be a separate deployment for each client. Some clients would be small-scale; some will prefer the app to run on a local intranet rather than on the web, while others may prefer the cloud approach for a black-box solution. The app may run for a few hours or it may run indefinitely; it depends on the client and the project. Other than the deployment scenarios, the apps would be more or less identical. As you may see from the tags, I'm assuming a message-based architecture is probably the most versatile, but I'm also used to being wrong about this stuff. All suggestions and pointers are welcome, regarding general architectures as well as specific solutions.

  • Getting a Specified Cast is not valid while importing data from Excel using Linq to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic Linq to SQL is, I wanted to try importing data from an Excel sheet into my SQL database.

    First, my Excel sheet: it contains 4 columns, namely ItemNo, ItemSize, ItemPrice, UnitsSold.

    I have created a database table, ProductsSold, with the following fields:

        Id        int           not null identity  -- auto increment set to true
        ItemNo    VarChar(10)   not null
        ItemSize  VarChar(4)    not null
        ItemPrice Decimal(18,2) not null
        UnitsSold int           not null

    Now I created a dal.dbml file based on my database, and I am trying to import the data from the Excel sheet into the db table using the code below. Everything happens on the click of a button.

        private const string forecast_query =
            "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]";

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            var importer = new LinqSqlModelImporter();
            if (fileUpload.HasFile)
            {
                var uploadFile = new UploadFile(fileUpload.FileName);
                try
                {
                    fileUpload.SaveAs(uploadFile.SavePath);
                    if (File.Exists(uploadFile.SavePath))
                    {
                        importer.SourceConnectionString = uploadFile.GetOleDbConnectionString();
                        importer.Import(forecast_query);
                        gvDisplay.DataBind();
                        pnDisplay.Visible = true;
                    }
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Source.ToString());
                    lblInfo.Text = ex.Message;
                }
                finally
                {
                    uploadFile.DeleteFileNoException();
                }
            }
        }

    And here is the code for LinqSqlModelImporter:

        public class LinqSqlModelImporter : SqlImporter
        {
            public override void Import(string query)
            {
                // import data using an OleDb command and insert it into the
                // db using LINQ to SQL
                using (var context = new WSDALDataContext())
                {
                    using (var myConnection = new OleDbConnection(base.SourceConnectionString))
                    using (var myCommand = new OleDbCommand(query, myConnection))
                    {
                        myConnection.Open();
                        var myReader = myCommand.ExecuteReader();
                        while (myReader.Read())
                        {
                            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
                            {
                                ItemNo = myReader.GetString(0),
                                ItemSize = myReader.GetString(1),
                                ItemPrice = myReader.GetDecimal(2),
                                UnitsSold = myReader.GetInt32(3)
                            });
                        }
                    }
                    context.SubmitChanges();
                }
            }
        }

    Can someone please tell me where I am making the error, or if I am missing something? This is driving me nuts. When I debugged, I got the error "when casting from a number, the value must be a number less than infinity". I really appreciate it.
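
    A common culprit with the Excel OLE DB provider, offered as a likely fix rather than a certainty: numeric cells are reported as Double (or DBNull for blanks) regardless of how they display, so the typed GetDecimal/GetInt32 accessors throw when the underlying type differs. Converting defensively sidesteps that:

        // Read each column as object and let Convert handle the actual
        // runtime type (Double, string, etc.) coming back from Excel.
        while (myReader.Read())
        {
            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
            {
                ItemNo = Convert.ToString(myReader[0]),
                ItemSize = Convert.ToString(myReader[1]),
                ItemPrice = Convert.ToDecimal(myReader[2]),
                UnitsSold = Convert.ToInt32(myReader[3])
            });
        }

    If the sheet can contain empty cells, a DBNull check before each Convert call is also worth adding.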

  • Plotting a graph with GD

    - by Nayena
    Here it goes. I have been thinking about this for a long time and haven't really been able to come up with a proper way to do it yet. I haven't implemented anything yet, as I'm still designing the thing. The idea is that I crawl a website for internal links. I've got this settled, it's easy, but after the crawling I end up with an array of lots of links, how many times each particular link appears on the site I crawled, and how they're connected. With this huge array, I want to draw a graph somehow. Assuming I can handle the data correctly, the real question here is how I can draw this in an image using the GD library. I figured that if there are fewer than 12 elements, I can space them out around a unit circle and then connect them accordingly, so anything up to 12 elements shouldn't be a problem; it would be awesome to get more than 12 lined up the same way. That's just a rough idea, but I guess it proves the point. So I'm here looking for guidance or tips towards getting the math down to line the nodes up in a good way. I have previously made bar graphs, so I have a little experience doing math with GD. If possible, I'd prefer not to use some plotter library; in the end, doing it myself gives me a better understanding of how things are supposed to work.
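
    The circle math itself is small and works for any number of nodes: node i of N sits at angle 2*pi*i/N. A sketch with GD, where $nodes and $links (pairs of node indexes) are assumed to come out of the crawl:

        <?php
        // Place $n nodes evenly on a circle and draw each link as a chord.
        $n = count($nodes);
        $img = imagecreatetruecolor(400, 400);
        $white = imagecolorallocate($img, 255, 255, 255);
        $black = imagecolorallocate($img, 0, 0, 0);
        imagefill($img, 0, 0, $white);

        $cx = 200; $cy = 200; $r = 150;
        $pos = array();
        for ($i = 0; $i < $n; $i++) {
            $a = 2 * M_PI * $i / $n;  // angle of node i
            $pos[$i] = array((int)round($cx + $r * cos($a)),
                             (int)round($cy + $r * sin($a)));
        }

        foreach ($links as $link) {
            list($from, $to) = $link;
            imageline($img, $pos[$from][0], $pos[$from][1],
                            $pos[$to][0],   $pos[$to][1], $black);
        }
        imagepng($img, 'graph.png');
        ?>

    Line thickness or colour can then encode how often a link appears; past a few dozen nodes, circular layouts get crowded, which is where force-directed placement usually takes over.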

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed the following attitudes in my programming:

    - I tend to be a purist, scorning inelegant approaches to solving problems with code.
    - I tend to look at everything on a large scale, planning before I start coding, in either simple flowcharts or complex UML charts.
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times.
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions.
    - I always want to study something new, even, as I said, at the cost of missing deadlines.
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking.
    - I combine OOP and procedural coding whenever I see fit.
    - I want my code to execute fast (thus the elegant approaches and refactoring).

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up coding and get the job done much faster, because they don't have to look at how clean their code is or how elegant their algorithms are. They don't bother with OOP, however big their projects are; they mostly use web APIs and piece them together, and voila, working code! Clients are happy, and they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care about how soon you deliver it. If it works, then all is good. Now, was my "purist" approach the wrong way to start programming? Should I just dump these purist concepts and code the hell out of things, because I have seen it: clients don't really care how beautifully coded it is?

  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in Python, included below. It serves as a basic way of searching through existing financial time series and looking for periods that match certain characteristics. In this case there are two separate, equally sized arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time, I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case, the ratio of the close values is greater than 1.0001 and less than 1.5, and the summed volume is greater than 100).

    My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something, as long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way, using numpy/scipy, to speed this up substantially in native Python? Alternatively, is there a better way to do this in general (e.g. would it be much faster to write this loop in C++ and use weave)?

        ARRAY_LENGTH = 500000
        INTERVAL_LENGTH = 15
        close = np.array( xrange(ARRAY_LENGTH) )
        volume = np.array( xrange(ARRAY_LENGTH) )
        close, volume = close.astype('float64'), volume.astype('float64')

        results = []
        for i in xrange(len(close) - INTERVAL_LENGTH):
            for j in xrange(i+1, i+INTERVAL_LENGTH):
                ret = close[j] / close[i]
                vol = sum( volume[i+1:j+1] )
                if ret > 1.0001 and ret < 1.5 and vol > 100:
                    results.append( [i, j, ret, vol] )
        print results
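
    A sketch of one vectorized shape for this, using the arrays defined above: fix the gap d = j - i and handle every start index at once, with a cumulative sum standing in for the repeated sum() over volume slices (edge handling at the very end of the array differs slightly from the original loop bounds):

        import numpy as np

        # cumvol[k] = volume[0] + ... + volume[k-1], so the window sum
        # volume[i+1 : j+1] becomes cumvol[j+1] - cumvol[i+1].
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))

        results = []
        n = len(close)
        for d in xrange(1, INTERVAL_LENGTH):      # gap between i and j = i + d
            ret = close[d:] / close[:n - d]       # every ratio for this gap at once
            vol = cumvol[d + 1:] - cumvol[1:n - d + 1]
            ok = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            for i in np.nonzero(ok)[0]:
                results.append([i, i + d, ret[i], vol[i]])

    This replaces the half-million-iteration inner loop with 14 whole-array operations per gap length; the remaining Python loop only touches the (typically rare) matches.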

  • Coordinating typedefs and structs in std::multiset (C++)

    - by Sarah
    I'm not a professional programmer, so please don't hesitate to state the obvious. My goal is to use a std::multiset container (typedef'd as EventMultiSet), called currentEvents, to organize a list of structs of type Event, and to have members of class Host occasionally add new Event structs to currentEvents. The structs are supposed to be sorted by one of their members, time. I am not sure how much of what I am trying to do is legal; the g++ compiler reports (in "Host.h") "error: 'EventMultiSet' has not been declared." Here's what I'm doing:

        // Event.h
        struct Event {
        public:
            bool operator < ( const Event & rhs ) const {
                return ( time < rhs.time );
            }
            double time;
            int eventID;
            int hostID;
        };

        // Host.h
        ...
        void calcLifeHist( double, EventMultiSet * );       // produces compiler error
        ...
        void addEvent( double, int, int, EventMultiSet * ); // produces compiler error

        // Host.cpp
        #include "Event.h"
        ...

        // main.cpp
        #include "Event.h"
        ...
        typedef std::multiset< Event, std::less< Event > > EventMultiSet;
        EventMultiSet currentEvents;
        EventMultiSet * cePtr = &currentEvents;
        ...

    Major questions:

    1. Where should I put the EventMultiSet typedef?
    2. Are my EventMultiSet pointers obviously problematic?
    3. Is the compare function within my Event struct (in theory) okay?

    Thank you very much in advance.
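
    The error is one of visibility: the typedef lives in main.cpp, so Host.h has never seen it. A sketch of the usual fix, moving the typedef next to the struct it depends on:

        // Event.h
        #include <set>

        struct Event {
            bool operator<(const Event& rhs) const { return time < rhs.time; }
            double time;
            int eventID;
            int hostID;
        };

        // Now visible to every file that includes Event.h. std::less<Event>
        // is the default comparator, so naming it explicitly is optional.
        typedef std::multiset<Event> EventMultiSet;

        // Host.h
        #include "Event.h"

        void calcLifeHist(double t, EventMultiSet* events);  // declares fine now
        void addEvent(double t, int id, int host, EventMultiSet* events);

    The pointers and the comparison function are both fine as written: operator< compares only time, which is exactly the strict weak ordering a multiset needs.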

  • How do I grep for entire, possibly wrapped, lines of code?

    - by NXT
    When searching code for strings, I constantly run into the problem that I get meaningless, context-less results. For example, if a function call is split across 3 lines and I search for the name of a parameter, I get the parameter on a line by itself and not the name of the function. For example, in a file containing

        ...
        someFunctionCall ("test",
                          MY_CONSTANT,
                          (some *really) - long / expression);

    grepping for MY_CONSTANT would return a line that looked like this:

        MY_CONSTANT,

    Likewise, in a comment block:

        /////////////////////////////////////////
        // FIXMESOON, do..while is the wrong choice here, because
        // it makes the wrong thing happen
        /////////////////////////////////////////

    grepping for FIXMESOON gives the very frustrating answer:

        // FIXMESOON, do..while is the wrong choice here, because

    When there are thousands of hits, single-line results are a little meaningless. What I would like is to have grep be aware of the start and stop points of source code statements; something as simple as having it consider ";" the line separator would be a good start. Bonus points if you can make it return the entire comment block when the hit is inside a comment. I know you can't do this with grep alone. I am also aware of the option to have grep return a certain number of lines of context. Any suggestions on how to accomplish this under Linux? FYI, my preferred languages are C and Perl. I'm sure I could write something, but I know that somebody must have already done this. Thanks!
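
    A sketch of the ";" idea using awk, whose record separator can be changed, so each "record" becomes one statement regardless of how many lines it spans:

        # Treat ';' as the record separator and print whole statements that
        # match, flattening internal newlines for readability.
        awk 'BEGIN { RS = ";" }
             /MY_CONSTANT/ { gsub(/\n[ \t]*/, " "); print $0 ";" }' file.c

    Comments don't end in ";", so the bonus-points case needs a different rule; Perl's paragraph mode (perl -00 -ne '...') or plain grep context flags (-A/-B) are the usual cheap approximations there.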

  • Creating C++ client app for some abstract windows server - how to manage TCP connection to server speed?

    - by Kabumbus
    So we have a server with some address, port and IP, and we are developing that server ourselves, so we can implement whatever is needed on it to help. What are standard/best practices for managing data transfer speed between a C++ Windows client app and the server (also C++)? My main question is how to find out how much data can be uploaded/downloaded from/to the client over his slow network to my comparatively very fast server (I need this to set the bit rate of his live audio/video stream).

    To try to explain further: we do not care how fast our server is; it is always faster than needed. We care about the client trying to stream his media out to our server. He streams live video data, encoded via ffmpeg, to our server. But he has, say, ADSL with 500 kb/s of outgoing traffic. He also uses ICQ or whatever, so he has less than 500 kb/s actually available. And he wants to stream live video! So we need to set up our ffmpeg encoder to encode video with respect to the bit rate the user can really provide. We develop both the server side and the client side, and we need a way of finding out how much the user can upload per second at any given moment (so the value can change dynamically over time).
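
    A minimal sketch of the usual probing approach, assuming an already-connected TCP socket and a hypothetical blocking send_all(): time how long a known payload takes to push through, derive kbit/s, and feed that to the encoder's target bitrate.

        #include <chrono>
        #include <cstddef>
        #include <vector>

        // Hypothetical: blocks until exactly len bytes are sent; TCP's
        // backpressure is what makes the elapsed time reflect the uplink.
        bool send_all(int sock, const char* data, std::size_t len);

        double measure_upload_kbps(int sock) {
            std::vector<char> probe(64 * 1024, 0);   // 64 KiB test payload
            auto start = std::chrono::steady_clock::now();
            if (!send_all(sock, probe.data(), probe.size()))
                return 0.0;
            std::chrono::duration<double> elapsed =
                std::chrono::steady_clock::now() - start;
            return probe.size() * 8.0 / 1000.0 / elapsed.count();  // kbit/s
        }

    One caveat: the socket's send buffer absorbs the first writes, so a single small probe overestimates; measuring continuously over the live stream itself (bytes successfully sent per second) and adjusting the encoder bitrate with some hysteresis is the steadier approach.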
