Search Results

Search found 2329 results on 94 pages for 'minute'.


  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will send ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (every piece of data we can read from the vehicle computer and dashboard). Data is sent over GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k.

    I see 3 main blocks:

    1. Multithreaded Socket Server (MSS) - I have it. The MSS stores received data in a queue (using NServiceBus).

    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). Rule example: as I said, among the received bytes there will be the current speed. When the speed is above 120: show an alert in the web application for specified users, send an e-mail, send an SMS text. (There can be more than one instance of the RPS.)

    3. Web application - allows reporting and lets users define rules, monitor alerts, etc.

    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and the RPS have separate databases, or will one central database be enough? I have one domain model in the web application. If there is one central database, can I use the same model (objects) in the RPS? And how do I send changed rules to the RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
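    To make the rule-sharing question concrete, here is a rough illustrative sketch (not from the original post) of a rule kept as plain data - the kind of record both the web application and the RPS could read from a shared table, so "sending changed rules to the RPS" reduces to the RPS re-reading, or being notified of, changed rows:

        # Hypothetical rule record and evaluation, only to frame the design question.
        speed_rule = {
            'field': 'speed',
            'operator': '>',
            'threshold': 120,
            'actions': ['web_alert', 'email', 'sms'],
        }

        def evaluate(rule, sample):
            # sample: one decoded ~250-byte message, e.g. {'vehicle_id': 7, 'speed': 131, ...}
            value = sample.get(rule['field'])
            if value is None:
                return []
            if rule['operator'] == '>' and value > rule['threshold']:
                return rule['actions']          # hand these to the Notifier Server
            return []

        print(evaluate(speed_rule, {'vehicle_id': 7, 'speed': 131}))   # ['web_alert', 'email', 'sms']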

    Read the article

  • How can I work around SQL Server - Inline Table Value Function execution plan variation based on parameters

    - by Ovidiu Pacurar
    Here is the situation: I have a table value function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with column Date smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day, in my case) the execution plan is altered and the query goes from 1 second to 1 minute of execution time.

    A proof-of-concept table - 1K products, 2M rows:

        CREATE TABLE [dbo].[POC](
            [Date] [datetime] NOT NULL,
            [idProduct] [int] NOT NULL,
            [Quantity] [int] NOT NULL
        ) ON [PRIMARY]

    The inline table value function:

        CREATE FUNCTION tdf (@p_date datetime)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT idProduct, SUM(Quantity) AS TotalQuantity, MAX(Date) AS LastDate
            FROM POC
            WHERE (Date < @p_date)
            GROUP BY idProduct
        )

    The scalar value function:

        CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
        RETURNS datetime
        AS
        BEGIN
            DECLARE @res datetime
            SET @res = dateadd(second, -1, dateadd(day, 1, dateadd(ms, -datepart(ms, @date), dateadd(ss, -datepart(ss, @date), dateadd(mi, -datepart(mi, @date), dateadd(hh, -datepart(hh, @date), @date))))))
            RETURN @res
        END

    Query 1 - working great:

        SELECT * FROM [dbo].[tdf] (getdate())

    The end of the execution plan: Stream Aggregate (Cost 13%) <--- Clustered Index Scan (Cost 86%)

    Query 2 - not so great:

        SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

    The end of the execution plan: Stream Aggregate (Cost 4%) <--- Filter (Cost 12%) <--- Clustered Index Scan (Cost 86%)

    Read the article

  • I write barely functional scripts that tend to not be reusable and make the baby jesus cry. Please help

    - by maxxpower
    I received a request to add around 100 users to a Linux box. The users are already in LDAP, so I can't just use newusers and point it at a text file. Another admin is taking care of the LDAP piece, so all I have to do is create all the home directories and chown them to the correct user once he adds the users to the box. Creating the directories isn't a problem, but I'd like a more elegant script for chowning them to the correct user. What I have currently basically looks like:

        chown -R testuser1:testgroup1 /home/testuser1; chown -R testuser2:testgroup2 /home/testuser2; chown -R testuser3:testgroup1 /home/testuser3

    Basically I took the request with the user name and group name, popped it into Excel, added a column of "chown -R" to the front, then added a column of "/", copied and pasted the username column after it, then added a column of ";" and dragged it down to the second-to-last row. Popped it into Notepad, ran some quick find-and-replaces, and in less than a minute I had a completed request and a sad, empty feeling. I know this was a really ghetto method, and I'm trying to stop using Excel as a way to avoid learning new scripting techniques, so here's my real question.

    tl;dr I made 100 home directories and chowned them to the correct users, but it was ugly. Actual question below. You have a file named idlist that looks like this (only with, say, 1000 users and real usernames and groups):

        testuser1 testgroup1
        testuser2 testgroup2
        testuser3 testgroup1

    Write a script that creates home directories for all the users and chowns the created directories to the correct user and group. To make the directories I used the following (feel free to flame/correct me on this as well):

        var=`cut -f1 -d" " idlist`
        mkdir $var
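    A minimal sketch of one way to do this in Python rather than shell, assuming the idlist file and the /home layout described above (the user and group names are the hypothetical ones from the example):

        import grp
        import os
        import pwd

        # idlist: one "username groupname" pair per line
        with open('idlist') as f:
            for line in f:
                if not line.strip():
                    continue
                user, group = line.split()
                home = os.path.join('/home', user)
                if not os.path.isdir(home):
                    os.makedirs(home, 0o700)
                # look up the numeric ids and hand the directory to the user and group
                os.chown(home, pwd.getpwnam(user).pw_uid, grp.getgrnam(group).gr_gid)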

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that, whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and company should be built using the instructions in /Source/Module/Rakefile; and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (a la recursive make), or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Does anyone have a clue on how to properly do something like a recursive rake?

    Read the article

  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data: gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30 minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes).

        robjects.r("load('%s')"%path) # loads immgen
        e = robjects.r['data.frame']("exprs(immgen)")
        expression_data = np.array(e)

    This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs() due to things like:

        In [40]: e._get_ncol()
        Out[40]: 1

        In [41]: e._get_nrow()
        Out[41]: 1

    But then again who knows? Even if e did represent my data.frame, that it doesn't convert straight to an array would be fair enough - a data frame has more in it than an array (rownames and colnames) and so maybe life shouldn't be this easy. However I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Anyone any thoughts?
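    One likely culprit, offered as a hedged sketch rather than a verified fix: robjects.r['data.frame']("exprs(immgen)") passes the literal string "exprs(immgen)" to R's data.frame(), which yields a 1x1 frame. Evaluating the expression in R first, and letting rpy2's numpy converter do the translation, would look roughly like this (assuming a reasonably recent rpy2 with the numpy2ri module; path and immgen are as described above):

        import numpy as np
        import rpy2.robjects as robjects
        from rpy2.robjects import numpy2ri

        numpy2ri.activate()                      # let R matrices come back as numpy arrays

        robjects.r("load('%s')" % path)          # loads the 'immgen' ExpressionSet
        mat = robjects.r('exprs(immgen)')        # evaluated in R: the 125-column expression matrix
        expression_data = np.asarray(mat)        # expected shape: (genes, 125)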

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing and I'd like some more opinions.

    The situation: the program should update in near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input.

    My solution was: build a server that runs continuously and stores the data internally. The server dumps a state-of-the-program at regular intervals to protect against power failures and/or crashes.

    He argued that the program requires a database and that I should use cronjobs to update the data. I can store movement information by storing startpoint, endpoint and speed, and update the position in the cronjob (and calculate collisions with other objects there) by calculating direction and speed.

    His reasons: my approach requires more CPU and memory because it runs constantly; power failures/crashes might destroy data; databases are faster.

    My reasons against this are mostly: it is not very precise, as events can only occur at full minutes (wouldn't be that bad though); it requires a (possibly costly) transformation of data on every run from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; power failures (or other crashes) can leave the data in an undefined state with only partially updated data unless (possibly costly) precautions (like transactions) are taken.

    What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics that show what sections of videos are being watched (at the moment, we just register a view when a video starts playing) For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section. My first thought was to collect up the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event that you can hook into when a viewer clicks away from the page the movie is on (probably a good thing - it would be open to abuse) So, it looks like we're going to have to make regular requests to the server as the video is playing. This is obviously going to lead to a high volume of requests when there are large numbers of simultaneous viewers. The simple approach of dumping all these 'heartbeat' events from clients to a database feels like it will quickly become unmanageable so I'm wondering whether I should be taking an approach where viewing sessions are cached in memory and flushed to database when they become inactive (based on a timeout). That way, the data could be stored as time spans rather than individual heartbeats. So, to the question - what is the best way to approach dealing with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
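    A rough sketch of the timeout-flushed, in-memory approach described above (the names, the 30-second session timeout and the 15-second scrub threshold are illustrative assumptions, not part of the original post):

        import time

        SESSION_TIMEOUT = 30          # seconds of silence before a viewing session is closed
        active_sessions = {}          # session_id -> the span currently being built
        finished_spans = []           # stand-in for "flush to database"

        def record_heartbeat(session_id, position, now=None):
            # Fold a heartbeat (current playback position, in seconds) into an open span.
            now = now if now is not None else time.time()
            s = active_sessions.get(session_id)
            if s is None or position < s['end'] or position - s['end'] > 15:
                # first heartbeat, or the viewer scrubbed: close any old span, start a new one
                # (15s gap ~= a few missed heartbeats, assuming a 5s heartbeat interval)
                if s is not None:
                    finished_spans.append((session_id, s['start'], s['end']))
                s = {'start': position, 'end': position}
                active_sessions[session_id] = s
            s['end'] = position
            s['last_seen'] = now

        def flush_inactive(now=None):
            # Called periodically: move sessions that have gone quiet into finished_spans.
            now = now if now is not None else time.time()
            for sid, s in list(active_sessions.items()):
                if now - s['last_seen'] > SESSION_TIMEOUT:
                    finished_spans.append((sid, s['start'], s['end']))
                    del active_sessions[sid]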

    Read the article

  • How can I embed images in an ASP.NET Generated Word File

    - by Nikos Steiakakis
    Hi everyone! I have a quite common problem, as I saw in the various user groups, but could not find a suitable answer. What I want to do is generate an ASP.NET page in my website which will have the option of being exported into Microsoft Word .doc format. The method I have used is this:

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=Test.doc");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/msword";
        StringWriter sw = new StringWriter();
        HtmlTextWriter htmlWrite = new HtmlTextWriter(sw);
        Page.RenderControl(htmlWrite);
        Response.Write(sw.ToString());
        Response.End();

    However, even though this generates a Word doc, the images are not embedded in the document; rather, they are placed as links. I have looked for a way to do this, but have not found something that actually worked. I would appreciate any help I can get, since this is a "last minute" requirement (talk about typical). Thanks

    Read the article

  • Make PasswordRecovery control work with locked out users?

    - by Moe Sisko
    Example scenario in an ASP.NET application using the SQL Server membership provider:

    1) A user can't remember their exact password, and tries many times in a short space of time to log in with an invalid password (say 5 times in a 10 minute window). This locks out the user (i.e. sets the IsLockedOut flag of the aspnet_Membership table to 1).

    2) The user goes to the "forgot my password" screen to try to get a new password emailed to them. This screen uses the PasswordRecovery control. The user enters their correct user id, but then cannot go further in the password recovery process, since the IsLockedOut flag is 1. (They don't even get to see their security question.)

    3) The user would then have to phone tech support to get themselves unlocked etc.

    To reduce the burden on support staff, we are trying to eliminate step 3) if possible, by making the PasswordRecovery control (if possible) work with locked-out users: when they enter their login ID, the security question comes up, and IF they enter the correct answer, the system unlocks the user and then sends the new email to them. I'm wondering if it is possible to tweak the PasswordRecovery control to do this. Or maybe I'm approaching this the wrong way?

    Read the article

  • Run a PHP script every second using CLI

    - by Saif Bechan
    Hello, I have a dedicated server running CentOS with a Parallels Plesk panel. I need to run a PHP script every second that updates my database. There is no alternative way timewise - I have checked every method; it needs to be updated every second. I can reach my script at the url http://www.mysite.com/phpfile.php?key=123, and this has to be executed every second. Does anyone have any knowledge at all on doing this? I cannot seem to find the answer. I heard about doing it with the CLI and PuTTY, but I have no knowledge of this at all. Or can this be done using the Plesk panel? And can the file be executed locally every second, like \phpfile.php? If someone helps me in answering these questions I would really appreciate it. Regards

    EDIT: It has been a few months since I added this question. I ended up using the following code:

        #!/usr/bin/php
        <?php
        $start = microtime(true);
        set_time_limit(60);
        for ($i = 0; $i < 59; ++$i) {
            doMyThings();
            time_sleep_until($start + $i + 1);
        }

    Thank you for this code, guys! My cronjob is set to every minute. I have been running this for some time now in a test environment, and it works out great. It runs really fast, and I see no increase in CPU nor memory usage.

    Read the article

  • [Python] name 'OptionGroup' is not defined

    - by Cawas
    Ok, so I made this rookie mistake below, but in my defense I was led to it by how the help on this subject is presented in the Python docs, which explain how to use optparse. It is actually an error under the gigantic tutorial section. In contrast, and to my offense, I may be one of the very few stupid people who can't read very well and pay close attention to what I do. But since this took me so long to discover, I wanted to "document" it here:

        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        >>> from optparse import OptionParser
        >>> outputGroup = OptionGroup(parser, 'Output handling')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'OptionGroup' is not defined

    This is strictly done with examples found in the docs, and you can't find anything about it anywhere, be it that long, long docs page, Google or Stack Overflow. Plus, reading optparse.py shows OptionGroup is there, so that adds to the confusion. I bet it will take less than 1 minute for someone to spot my error. For that I'll only add proper tags and / or modify the title later on. :)
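    For reference, a minimal working sketch: the mistake above is simply that OptionGroup was never imported, and the parser has to exist before the group can reference it (the -o/--output option is just an illustrative placeholder):

        from optparse import OptionParser, OptionGroup

        parser = OptionParser()
        output_group = OptionGroup(parser, 'Output handling')
        output_group.add_option('-o', '--output', dest='output', help='where to write results')
        parser.add_option_group(output_group)

        (options, args) = parser.parse_args()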

    Read the article

  • How can I optimize this or is there a better way to do it? (HTML Syntax Highlighter)

    - by Tanner
    Hello everyone, I have made an HTML syntax highlighter in C# and it works great, but there's one problem. It runs pretty fast because it highlights line by line, but when I paste more than one line of code or open a file, I have to highlight the whole file, which can take up to a minute for a file with only 150 lines of code. I tried highlighting just the visible lines in the RichTextBox, but then when I try to scroll I can't get it to highlight the new visible text. Here is my code (note: I need to use regex so I can get the stuff in between the < and > characters):

    Highlight whole file:

        public void AllMarkup()
        {
            int selectionstart = richTextBox1.SelectionStart;
            Regex rex = new Regex("<html>|</html>|<head.*?>|</head>|<body.*?>|</body>|<div.*?>|</div>|<span.*?>|</span>|<title.*?>|</title>|<style.*?>|</style>|<script.*?>|</script>|<link.*?/>|<meta.*?/>|<base.*?/>|<center.*?>|</center>|<a.*?>|</a>");
            foreach (Match m in rex.Matches(richTextBox1.Text))
            {
                richTextBox1.Select(m.Index, m.Value.Length);
                richTextBox1.SelectionColor = Color.Blue;
                richTextBox1.Select(selectionstart, -1);
                richTextBox1.SelectionColor = Color.Black;
            }
            richTextBox1.SelectionStart = selectionstart;
        }

        private void pasteToolStripMenuItem_Click(object sender, EventArgs e)
        {
            try
            {
                LockWindowUpdate(richTextBox1.Handle); // stops the text from flashing
                richTextBox1.Paste();
                AllMarkup();
            }
            finally
            {
                LockWindowUpdate(IntPtr.Zero);
            }
        }

    I want to know if there's a better way to highlight this and make it faster, or if someone can help me make it highlight only the visible text. Please help. :) Thanks, Tanner.

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() call takes an incredibly long time even after all child processes are completed. The way I know is by printing out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)?

        my $pm = Parallel::ForkManager->new(16);
        for my $i (1..16) {
            $pm->start($i) and next;
            ... do something within the child-process ...
            print (scalar localtime), " Process $i completed.\n";
            $pm->finish();
        }
        print (scalar localtime), " Waiting for some child process to finish.\n";
        $pm->wait_all_children();
        print (scalar localtime), " All processes finished.\n";

    Clearly, I'll get the "Waiting for some child process to finish" message first, with a timestamp of, say, 7:08:35. Then I'll get a list of "Process i completed" messages, with the last one at 7:10:30. However, I do not receive the "All processes finished" message until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thx!

    Read the article

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    It seems that a website I manage has been going online and offline throughout the entire day today. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a Wordpress-based site. So here is what I DO know:

    I use a program that pings the server every minute and emails me when the server is not responding, so I know exactly when the site is online and offline. The site went down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone; all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. While the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/).

    So at this point I would appreciate some help in trying to figure out what the cause of this is, and how I can go about pinpointing the issue. ANY HELP is greatly appreciated.

    Read the article

  • Help me avoid a resonance cascade.

    - by SLC
    Hi, my name is Dr. Kleiner, and I'm a senior scientist working at the Black Mesa Research Facility. I've just finished compiling my code to analyse a large unknown sample we've come across. Unfortunately, there were 19 build errors and 42 warnings, but I've been told the experiment must go ahead. Time is critical, we've already got one of our newest employees who is suiting up as I type this to complete the experiment. I really need some help. Can you think of anything last minute to stop a potential resonance cascade? Someone has hidden my glasses again... Anyway, I hope I never see a resonance cascade, and definitely don't want to create one. It's my lunch break in 5 minutes, and my casserole is already in the microwave ready. Please, give me some advice. If it helps I wrote all of the code to analyse the sample and activate the sampler machine in BASIC. Edit: Oh god! They're everywhere! Send assista

    Read the article

  • Strange Problem with Webservice and IIS

    - by Rene
    Hello there, I have a problem which confuses me a little bit, or rather one where I have no idea what it could be. The system I'm using is Windows Vista, IIS 7.0, VS2008, Windows Software Factory, Entity Framework, WCF. The binding for all web services is wsHttpBinding. I'm using a web service hosted in IIS. This web service uses/calls another web service (also installed in IIS). If I use a client calling the first web service (which calls the second web service), it works fine for about 4-10 calls. And then (the problem is repeatable, but sometimes it happens after 4 calls, sometimes after 10, though it will always happen) the service and IIS get stuck. Stuck means that this web service isn't callable anymore and generates a timeout after 1 minute. Even increasing the timeout doesn't change anything. If I try to restart IIS I get a timeout error. So IIS is also "stuck" (it is not really stuck, but I can't restart it). Only if I kill w3wp.exe is IIS restartable, and the web service will work again (until I again call this service several times). The log files (I'm no expert in things like logging or where to find/enable such logs - so to say, I'm a newbie) like HTTP logging, Event Viewer or WCF message logging don't show any hints as to the source of the problem. I don't have this problem when I'm using a web service which doesn't call another service. Calling a web service is done via a service reference (I'm using no proxy classes), but I think this should be no problem. I have no idea what is happening, nor how to solve this problem. Regards, Rene

    Read the article

  • Rails: fighting long HTTP response times with Ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials, browsed some SO answers, and was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query that fetches all the data I need to get my chart drawn. Everything is ok except performance - it takes from one second to one minute for different queries to fetch all the data from the database; this time includes the necessary (My)SQL server-side calculations. This is simply unacceptable. I got the following idea: if the data is queried from the MySQL server one point at a time instead of as an entire dataset, it takes only about 1-100ms to get an individual point. I imagine the data fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at "1% ready". When the browser gets the response, it immediately makes an (ajax) request, and the server fetches the next piece of data and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails+JS? Is there a tutorial for solving a similar problem on the Web? I suppose if the thing is feasible at all, somebody should have already done this before. I have read several articles about ajax, and I believe I understand the general principles, but I have never done nontrivial ajax programming myself. Thanks for your time!

    Read the article

  • PHP Post Count in Forum

    - by Chris
    I'm currently designing a forum application. I considered using a premade one but decided against it, as it's useful for me to learn some of the techniques. So I've written a fairly full-featured forum... great. One of the problems I want to solve is to include user data for each post. At the minute the post table includes the poster ID (obviously), and I added the poster's username at a later date so I didn't have to query the user DB for X number of posts in a thread. However, it's become apparent I now want to do this: usernames don't need to update retrospectively, but avatars, sigs, and especially post counts need to update actively, so this data needs keeping up to date somewhere... What would be a good way of implementing this? I obviously don't want to include any more user data in the posts DB table than necessary, but I'm struggling to find an easy way to do this short of querying the DB for each post in a thread, which is potentially going to create a lot of traffic. How have other people solved this? I've been examining the code of some other open source apps but I can't find what I'm looking for. Is it possible to select multiple records in one query? In that case I could build an array dynamically on each page request (e.g. 'SQL blah blah' then a foreach loop to insert the IDs). Could I join the tables each time? Do I submit a query for each post? Hmm.

    Read the article

  • A moral dilemma - What job to go for?

    - by StefanE
    Here is the story: I have accepted an offer from a gaming company to work as a senior test engineer / developer. I have not yet received a signed copy of the contract. I will get a bit less salary than I asked for, and it is also less than I have today. The company has booked flight tickets for my move over there. Now comes the problem. I did a telephone interview with another company last week, and they have asked me for an in-person interview and are willing to pay for the flights for the meeting. This company is my first choice (and has been for a few years); working there would also benefit my career, and I believe I would enjoy it more. What should I do here? I do feel uncomfortable giving a last-minute rejection when I have accepted the offer over the phone, but on the other hand they have yet to produce a signed contract and are paying me a bit less than I think I'm worth. The business is small in many ways and I don't want to end up with a bad reputation. It would be great to hear your opinions!

    Read the article

  • TimeoutException when WCF Host and Client are in the same process

    - by Pharao2k
    I've run into a really weird problem. I am building a heavily distributed application where each app instance can be either a host and/or a client to a WCF service (very p2p-like). Everything works fine as long as the client and the targeted host (by which I mean the app, not the machine, since currently everything runs on a single computer - so no firewall problems etc.) are NOT the same. IF they are the same, then the app hangs for exactly 1 minute and then throws a TimeoutException. WCF logging did not produce anything helpful. Here is a small app which demonstrates the problem:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
            }

            private void button1_Click(object sender, RoutedEventArgs e)
            {
                var binding = new NetTcpBinding();
                var baseAddress = new Uri(@"net.tcp://localhost:4000/Test");
                ServiceHost host = new ServiceHost(typeof(TestService), baseAddress);
                host.AddServiceEndpoint(typeof(ITestService), binding, baseAddress);
                var debug = host.Description.Behaviors.Find<ServiceDebugBehavior>();
                if (debug == null)
                    host.Description.Behaviors.Add(new ServiceDebugBehavior { IncludeExceptionDetailInFaults = true });
                else
                    debug.IncludeExceptionDetailInFaults = true;
                host.Open();

                var clientBinding = new NetTcpBinding();
                var testProxy = new TestProxy(clientBinding, new EndpointAddress(baseAddress));
                testProxy.Test();
            }
        }

        [ServiceContract]
        public interface ITestService
        {
            [OperationContract]
            void Test();
        }

        public class TestService : ITestService
        {
            public void Test()
            {
                MessageBox.Show("foo");
            }
        }

        public class TestProxy : ClientBase<ITestService>, ITestService
        {
            public TestProxy(NetTcpBinding binding, EndpointAddress remoteAddress)
                : base(binding, remoteAddress)
            {
            }

            public void Test()
            {
                Channel.Test();
            }
        }

    What am I doing wrong? Regards, Pharao2k

    Read the article

  • Python multiprocessing doesn't play nicely with uuid.uuid4().

    - by yig
    I'm trying to generate a uuid for a filename, and I'm also using the multiprocessing module. Unpleasantly, all of my uuids end up exactly the same. Here is a small example:

        import multiprocessing
        import uuid

        def get_uuid( a ):
            ## Doesn't help to cycle through a bunch.
            #for i in xrange(10): uuid.uuid4()

            ## Doesn't help to reload the module.
            #reload( uuid )

            ## Doesn't help to load it at the last minute.
            ## (I simultaneously comment out the module-level import.)
            #import uuid

            ## uuid1() does work, but it differs only in the first 8 characters
            ## and includes identifying information about the computer.
            #return uuid.uuid1()

            return uuid.uuid4()

        def main():
            pool = multiprocessing.Pool( 20 )
            uuids = pool.map( get_uuid, range( 20 ) )
            for id in uuids:
                print id

        if __name__ == '__main__':
            main()

    I peeked into uuid.py's code, and it seems to use some OS-level routines for randomness, depending on the platform, so I'm stumped as to a Python-level solution (to do something like reload the uuid module or choose a new random seed). I could use uuid.uuid1(), but only 8 digits differ and I think those are derived exclusively from the time, which seems dangerous, especially given that I'm multiprocessing (so the code could be executing at exactly the same time). Is there some Wisdom out there about this issue?
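    One workaround sketch, under the assumption that the duplicates come from a random state inherited from the parent at fork time: have each worker build its UUID directly from fresh OS entropy.

        import multiprocessing
        import os
        import uuid

        def get_uuid(_):
            # os.urandom() asks the OS for entropy in this process, so workers
            # don't share inherited random state; version=4 marks the result as
            # a random UUID, just like uuid.uuid4() would.
            return uuid.UUID(bytes=os.urandom(16), version=4)

        def main():
            pool = multiprocessing.Pool(20)
            for u in pool.map(get_uuid, range(20)):
                print(u)

        if __name__ == '__main__':
            main()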

    Read the article

  • Strangely structured XML: finding the last value of a certain type using Java

    - by Damien.Bell
    The structure is something like this:

        OasisReport > MessagePayload > RTO > Report_Item > Report_Data

    Under Report_Data it's broken into categories: Zone, Type, Value, Interval. What I need to do is: get the value if the type is equal to 'myType' and the interval value is the LARGEST. So an example of the XML might be (under REPORT_DATA, nested inside OasisReport / MessagePayload / RTO / REPORT_ITEM):

        <zone>myZone1</zone>       -- this should be the same in all reports since I only get them for 1 zone
        <type>myType</type>        -- this can change from line to line
        <value>12345</value>       -- this changes every interval
        <Interval>122</Interval>   -- this is essentially how many 5-minute intervals have taken place since the
                                      beginning of the day; finding the "max" lets me know it's the newest data

    So I want to find the value for "myType" at the "max" interval and pull the Value into a string (or a double; if not, I can convert from string). Can someone help me with this task? Thanks! Note: I've used XPath to handle things like this in the past, but it seems outlandish for this... as it's SO complex (since not all the reports live in the same report_item, and not all the types are the same in each report).

    Read the article

  • Project Euler problem 214: How can I make it more efficient?

    - by Once
    I am becoming more and more addicted to the Project Euler problems. However, for the past week I have been stuck on #214. Here is a short version of the problem: PHI() is Euler's totient function, i.e. for any given integer n, PHI(n) = the number of k <= n for which gcd(k, n) = 1. We can iterate PHI() to create a chain. For example, starting from 18: PHI(18)=6 -> PHI(6)=2 -> PHI(2)=1. So starting from 18 we get a chain of length 4 (18, 6, 2, 1). The problem is to calculate the sum of all primes less than 40e6 which generate a chain of length 25.

    I built a function that calculates the chain length of any number and I tested it for small values: it works well and fast.

        sum of all primes <= 20 which generate a chain of length 4: 12
        sum of all primes <= 1000 which generate a chain of length 10: 39383

    Unfortunately my algorithm doesn't scale well. When I apply it to the problem, it takes several hours to calculate... so I stop it, because Project Euler problems are meant to be solved in less than one minute. I thought that my prime detection function might be slow, so I fed the program a list of primes < 40e6 to avoid the primality test... The code now runs a little bit faster, but there is still no way to get a solution in less than a few hours (and I don't want that). So is there any "magic trick" that I am missing here? I really don't understand how to be more efficient on this one... I am not asking for the solution, because fighting with optimization is all the fun of Project Euler. However, any small hint that could put me on the right track would be welcome. Thanks!
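    Since the poster asked for a hint rather than the answer, here is only a sketch of the standard trick: sieve all the totients up to the limit once, then build chain lengths bottom-up so each number costs O(1) after the sieve. This is an illustrative outline; at 40e6 entries the plain Python lists below are very memory-hungry and slow, so the idea matters more than the exact code.

        def phi_sieve(limit):
            # Euler totient sieve: afterwards, phi[p] == p - 1 exactly when p is prime.
            phi = list(range(limit + 1))
            for i in range(2, limit + 1):
                if phi[i] == i:                    # i is prime (still untouched)
                    for j in range(i, limit + 1, i):
                        phi[j] -= phi[j] // i
            return phi

        def sum_primes_with_chain(limit, target_length):
            phi = phi_sieve(limit)
            chain = [0] * (limit + 1)
            chain[1] = 1
            total = 0
            for n in range(2, limit + 1):
                chain[n] = chain[phi[n]] + 1       # phi[n] < n, so it is already known
                if phi[n] == n - 1 and chain[n] == target_length:
                    total += n                      # n is prime and its chain has the right length
            return total

        # sanity checks from the post: length 4 below 20 -> 12, length 10 below 1000 -> 39383
        print(sum_primes_with_chain(20, 4))
        print(sum_primes_with_chain(1000, 10))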

    Read the article

  • source of historical stock data

    - by rmeador
    I'm trying to make a stock market simulator (perhaps eventually growing into a predicting AI), but I'm having trouble finding data to use. I'm looking for a (hopefully free) source of historical stock market data. Ideally, it would be a very fine-grained (second or minute interval) data set with price and volume of every symbol on NASDAQ and NYSE (and perhaps others if I get adventurous). Does anyone know of a source for such info? I found this question which indicates Yahoo offers historical data in CSV format, but I've been unable to find out how to get it in a cursory examination of the site linked. I also don't like the idea of downloading the data piecemeal in CSV files... I imagine Yahoo would get upset and shut me off after the first few thousand requests. I also discovered another question that made me think I'd hit the jackpot, but unfortunately that OpenTick site seems to have closed its doors... too bad, since I think they were exactly what I wanted. I'd also be able to use data that's just open/close price and volume of every symbol every day, but I'd prefer all the data if I can get it. Any other suggestions?

    Read the article

  • How can I test a CRON job with PHP?

    - by alex
    This is the first time I've ever used cron. I'm using it to parse external data that is automatically FTP'd to a subdirectory on our site. I have created a controller and model which handle the data. I can access the URL fine in my browser and it works (however I will be restricting this soon). My problem is: how can I test if it's working? I've added this to my controller for a quick and dirty log:

        $file = 'test.txt';
        $contents = '';
        if (file_exists($file)) {
            $contents = file_get_contents($file);
        }
        $contents .= date('m-d-Y') . ' --- ' . PHP_SAPI . "\n\n";
        file_put_contents($file, $contents);

    But so far I've only got requests logged from myself from the browser, despite having my cron job running every minute:

        03-18-2010 --- cgi-fcgi
        03-18-2010 --- cgi-fcgi

    I've set it up using cPanel with the command index.php properties/update/ - the second portion is what I use to access the page in my browser. So how can I test that this is working properly, and have I stuffed anything up? Note: I'm using Kohana 3. Many thanks

    Read the article
