Search Results

Search found 5638 results on 226 pages for 'scheduling algorithm'.

Page 166/226

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as the one about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).
    Basic structure and parameters
    Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters: a grid<N> (think of it as an alias to extent) and a restrict(direct3d) lambda, whose signature is such that it returns void and accepts an index of the same rank as the grid. So it looks something like this (with generous line breaks for more palatable formatting), assuming we are dealing with a 2-dimensional space:

        // some_code_A
        parallel_for_each(
            g, // g is of type grid<2>
            [](index<2> idx) restrict(direct3d)
            {
                // kernel code
            }
        );
        // some_code_B

    The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (a.k.a. the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next. You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:

        extent<2> e(2,3);
        grid<2> g(e);

    ...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are: (0,0) (1,0) (0,1) (1,1) (0,2) (1,2). So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and many are likely to call your code at the exact same time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"
    Passing data in and out
    It is a good question, and in data parallel algorithms indeed you typically want to pass some data in, perform some operation, and then return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error.
    This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (for now, think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, which I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn't be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) - the exception to this rule is the array_view, since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array - the point being that you can be very specific about which variables you capture and how. So, from a large-data perspective, you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…
    (a)synchronous
    Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately on the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality. That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view - coming next. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Any task-control algorithm programming practices?

    - by NumberFour
    Hi, I was just wondering whether there's any field that concerns task-control programming (or at least that's what I call it). For a better explanation of task-control, consider the following scenario: An application (master thread) waits for a command - which might be a particular action or a set of actions the application should perform. When a command is received, the master thread creates a task (= spawns an independent thread which actually does the action) and adds a record to its task list - thus keeping track of the time of execution, thread handle, task priority... etc. The master thread awaits any other incoming commands while taking care of all the tasks - e.g.: kills tasks running too long, prioritizes tasks with higher priorities, kills a task at the request of another task, limits the number of currently running tasks, allows task scheduling, cleans up finished tasks (threads) and so on. The model is pretty similar to what we can see in an OS dealing with running processes. Are there any good practices for programming such task models, or is there some theoretical work done in this field? Maybe my question is too generalized, but at least I wanted to know whether there is any experience working with such models or whether there's a better approach. Thanks for any answers.
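
    To make the scenario concrete, here is a minimal sketch of such a master-thread/task-list model in Python; the class and method names, the timeout policy, and the use of concurrent.futures are illustrative assumptions, not a reference design. Note the caveat in the comments: most runtimes cannot forcibly kill a running thread, which is one reason OS-style task control is usually built on processes instead.

        import concurrent.futures
        import threading
        import time
        import uuid

        class TaskMaster:
            """Master thread: accepts commands, spawns tasks, keeps a task list."""

            def __init__(self, max_tasks=8):
                # Bounding the pool size enforces the "limit running tasks" rule.
                self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=max_tasks)
                self._tasks = {}          # task id -> (future, start time, priority)
                self._lock = threading.Lock()

            def submit(self, fn, *args, priority=0):
                """Create a task record and hand the work to a worker thread."""
                task_id = uuid.uuid4().hex
                future = self._pool.submit(fn, *args)
                with self._lock:
                    self._tasks[task_id] = (future, time.time(), priority)
                return task_id

            def reap(self, max_age=60.0):
                """Housekeeping pass: drop finished tasks, cancel overdue ones."""
                with self._lock:
                    for task_id, (future, started, _prio) in list(self._tasks.items()):
                        if future.done():
                            del self._tasks[task_id]
                        elif time.time() - started > max_age:
                            # cancel() only stops tasks still waiting in the queue;
                            # a thread that is already running cannot be killed.
                            future.cancel()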

    Read the article

  • issue with std::advance on std::sets

    - by tim
    I've stumbled upon what I believe is a bug in the STL algorithm advance. When I'm advancing the iterator off of the end of the container, I get inconsistent results. Sometimes I get container.end(), sometimes I get the last element. I've illustrated this with the following code:

        #include <algorithm>
        #include <cstdio>
        #include <set>

        using namespace std;

        typedef set<int> tMap;

        int main(int argc, char** argv)
        {
            tMap::iterator i;
            tMap the_map;

            for (int i = 0; i < 10; i++)
                the_map.insert(i);

        #if EXPERIMENT==1
            i = the_map.begin();
        #elif EXPERIMENT==2
            i = the_map.find(4);
        #elif EXPERIMENT==3
            i = the_map.find(5);
        #elif EXPERIMENT==4
            i = the_map.find(6);
        #elif EXPERIMENT==5
            i = the_map.find(9);
        #elif EXPERIMENT==6
            i = the_map.find(2000);
        #else
            i = the_map.end();
        #endif

            advance(i, 100);

            if (i == the_map.end())
                printf("the end\n");
            else
                printf("wuh? %d\n", *i);

            return 0;
        }

    With this I get the following unexpected (according to me) behavior: in experiments 3 and 5 I get the last element instead of the_map.end().

        [tim@saturn advance]$ uname -srvmpio
        Linux 2.6.18-1.2798.fc6 #1 SMP Mon Oct 16 14:37:32 EDT 2006 i686 athlon i386 GNU/Linux
        [tim@saturn advance]$ g++ --version
        g++ (GCC) 4.1.1 20061011 (Red Hat 4.1.1-30)
        Copyright (C) 2006 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        [tim@saturn advance]$ g++ -DEXPERIMENT=1 advance.cc
        [tim@saturn advance]$ ./a.out
        the end
        [tim@saturn advance]$ g++ -DEXPERIMENT=2 advance.cc
        [tim@saturn advance]$ ./a.out
        the end
        [tim@saturn advance]$ g++ -DEXPERIMENT=3 advance.cc
        [tim@saturn advance]$ ./a.out
        wuh? 9
        [tim@saturn advance]$ g++ -DEXPERIMENT=4 advance.cc
        [tim@saturn advance]$ ./a.out
        the end
        [tim@saturn advance]$ g++ -DEXPERIMENT=5 advance.cc
        [tim@saturn advance]$ ./a.out
        wuh? 9
        [tim@saturn advance]$ g++ -DEXPERIMENT=6 advance.cc
        [tim@saturn advance]$ ./a.out
        the end
        [tim@saturn advance]$ g++ -DEXPERIMENT=7 advance.cc
        [tim@saturn advance]$ ./a.out
        the end

    From the SGI website (see link at top), it has the following example:

        list<int> L;
        L.push_back(0);
        L.push_back(1);
        list<int>::iterator i = L.begin();
        advance(i, 2);
        assert(i == L.end());

    I would think that the assertion should apply to other container types, no? What am I missing? Thanks!

    Read the article

  • Python coding test problem for interviews

    - by Kal
    I'm trying to come up with a good coding problem to ask interview candidates to solve with Python. They'll have an hour to work on the problem, with an IDE and access to documentation (we don't care what people have memorized). I'm not looking for a tough algorithmic problem - there are other sections of the interview where we do that kind of thing. The point of this section is to sit and watch them actually write code. So it should be something that makes them use just the data structures which are the everyday tools of the application developer - lists, hashtables (dictionaries in Python), etc, to solve a quasi-realistic task. They shouldn't be blocked completely if they can't think of something really clever. We have a problem which we use for Java coding tests, which involves reading a file and doing a little processing on the contents. It works well with candidates who are familiar with Java (or even C++). But we're running into a number of candidates who just don't know Java or C++ or C# or anything like that, but do know Python or Ruby. Which shouldn't exclude them, but leaves us with a dilemma: On the one hand, we don't learn much from watching someone struggle with the basics of a totally unfamiliar language. On the other hand, the problem we use for Java turns out to be pretty trivial in Python (or Ruby, etc) - anyone halfway competent can do it in 15 minutes. So, I'm trying to come up with something better. Surprisingly, Google doesn't show me anyone doing something like this, unless I'm just too dumb to enter the obvious search term. The best idea I've come up with involves scheduling workers to time slots, but it's maybe a little too open-ended. Have you run into a good example? Or a bad one? Or do you just have an idea?

    Read the article

  • Cron tips for not running cron jobs on holidays (the Monday of a three-day weekend)

    - by Poul
    We have a set-up of about one hundred machines, each running cron jobs such as starting and stopping services and archiving these services' log files at the end of the day to a centralized repository. One headache we have is the three-day weekend (we're closed on holidays). We don't want the services starting up on those days and connecting to our business partner's machines. We currently handle this by manually commenting out the most critical jobs and letting a bunch of errors happen all day. Not ideal. Basically, if a job has '1-5' set in the day field, we want this to mean 'work days' and not 'Monday to Friday'. We have a database that keeps track of which days are indeed 'work days'. So, is it possible to override cron's day-matching algorithm, or is there some other way to easily set a cron setting to avoid things starting up on a Monday holiday? Thanks!
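
    One common pattern (sketched here in Python, with an assumed sqlite workdays table and made-up paths) is to leave the cron schedule alone and wrap each critical job in a guard script that consults the workday database before running the real command:

        #!/usr/bin/env python
        # workday-guard: run the wrapped command only on a workday.
        # The database path and the "workdays(day)" schema are assumptions;
        # adapt the query to whatever your calendar database looks like.
        import datetime
        import sqlite3
        import subprocess
        import sys

        def is_workday(conn, day):
            row = conn.execute(
                "SELECT 1 FROM workdays WHERE day = ?", (day.isoformat(),)
            ).fetchone()
            return row is not None

        if __name__ == "__main__":
            conn = sqlite3.connect("/etc/ops/calendar.db")
            if is_workday(conn, datetime.date.today()):
                sys.exit(subprocess.call(sys.argv[1:]))
            # Not a workday: exit quietly so cron stays silent.

    The crontab entry then stays on its usual '1-5' schedule, e.g. 0 6 * * 1-5 /usr/local/bin/workday-guard /usr/local/bin/start-services, and holidays are filtered out by the database rather than by editing crontabs on a hundred machines.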

    Read the article

  • best practice for directory polling

    - by Hieu Lam
    Hi all, I have to do batch processing to automate a business process. I have to poll a directory at a regular interval to detect new files and process them. While old files are being processed, new files can come in. For now, I use the Quartz scheduler and thread synchronization to ensure that only one thread can process files. Parts of the code are: application-context.xml:

        <bean id="methodInvokingJob"
              class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
            <property name="targetObject" ref="documentProcessor" />
            <property name="targetMethod" value="processDocuments" />
        </bean>

    DocumentProcessor:

        .....
        public void processDocuments() {
            LOG.info(Thread.currentThread().getName() + " attempt to run.");
            if (!processing) {
                synchronized (this) {
                    try {
                        processing = true;
                        LOG.info(Thread.currentThread().getName() + " is processing");
                        List<String> xmlDocuments =
                            documentManager.getFileNamesFromFolder(incomingFolderPath);
                        // loop over the files and process unlocked files.
                        for (String xmlDocument : xmlDocuments) {
                            processDocument(xmlDocument);
                        }
                    } finally {
                        processing = false;
                    }
                }
            }
        }

    With the current code, I have to prevent other threads from processing files while one thread is processing. Is that a good idea? Or should we support multi-threaded processing? In that case, how can I know which files are being processed and which files have just arrived? Any idea is really appreciated.

    Read the article

  • rails foreman does not load all my services on start

    - by Rubytastic
    Rails foreman does not load all the services defined in my Procfile. Procfile.rb:

        redis: redis-server
        resque: bundle exec rake resque:start &&> log/resque_worker_queue.log
        privpub: bundle exec rackup private_pub.ru -s thin -E production & &> log/private_pub.log
        sunspot: bundle exec rake sunspot:solr:run

    I always have to start all of them manually by copy-pasting the commands into the terminal; foreman start does not work. What am I missing? This is the foreman output:

        12:35:40 privpub.1 | process terminated
        12:35:40 system | sending SIGTERM to all processes
        12:35:40 system | sending SIGTERM to pid 4375
        12:35:40 redis.1 | [4375] 02 Jun 12:35:40 # Received SIGTERM, scheduling shutdown...
        12:35:40 redis.1 | [4375] 02 Jun 12:35:40 # User requested shutdown...
        12:35:40 redis.1 | [4375] 02 Jun 12:35:40 * Saving the final RDB snapshot before exiting.
        12:35:40 redis.1 | [4375] 02 Jun 12:35:40 * DB saved on disk
        12:35:40 redis.1 | [4375] 02 Jun 12:35:40 # Redis is now ready to exit, bye bye...
        12:35:40 system | sending SIGTERM to pid 4376
        12:35:40 resque.1 | rake aborted!
        12:35:40 resque.1 | SIGTERM
        12:35:40 resque.1 |
        12:35:40 resque.1 | (See full trace by running task with --trace)
        12:35:40 system | sending SIGTERM to pid 4378
        12:35:40 sunspot.1 | rake aborted!
        12:35:40 sunspot.1 | SIGTERM
        12:35:40 sunspot.1 |
        12:35:40 sunspot.1 | (See full trace by running task with --trace)
        12:35:40 sunspot.1 | process terminated
        12:35:40 resque.1 | process terminated
        12:35:40 redis.1 | process terminated

    Read the article

  • Experience migrating legacy Cobol/PL1 to Java

    - by MadMurf
    ORIGINAL Q: I'm wondering if anyone has had experience of migrating a large Cobol/PL1 codebase to Java? How automated was the process and how maintainable was the output? How did the move from transactional to OO work out? Any lessons learned along the way or resources/white papers that may be of benefit would be appreciated. EDIT 7/7: Certainly the NACA approach is interesting; the ability to continue making your BAU changes to the COBOL code right up to the point of releasing the Java version has merit for any organization. The argument for procedural Java in the same layout as the COBOL, to give the coders a sense of comfort while familiarizing themselves with the Java language, is a valid argument for a large organisation with a large code base. As @Didier points out, the $3mil annual saving gives scope for generous padding on any BAU changes going forward, to refactor the code on an ongoing basis. As he puts it, if you care about your people you find a way to keep them happy while gradually challenging them. The problem as I see it with the suggestion from @duffymo, to best try and really understand the problem at its roots and re-express it as an object-oriented system, is that if you have any BAU changes ongoing, then during the LONG project lifetime of coding your new OO system you end up coding & testing changes twice over. That is a major benefit of the NACA approach. I've had some experience of migrating client-server applications to a web implementation, and this was one of the major issues we encountered: constantly shifting requirements due to BAU changes. It made PM & scheduling a real challenge. Thanks to @hhafez, whose experience is nicely put as "similar but slightly different" and who has had a reasonably satisfactory experience of an automatic code migration from Ada to Java. Thanks @Didier for contributing; I'm still studying your approach and if I have any Qs I'll drop you a line.

    Read the article

  • AAC 256kbit to MP3 320kbit conversion. I know it's lossy, but how?

    - by Fabian Zeindl
    Has anyone ever transcoded music from a high-quality AAC to an MP3 (or vice versa)? The internet is full of people who say this should never be done, but apart from the theoretical standpoint that you can only lose information, does it matter in practice? Is the difference perceptible, except on studio equipment? Does the re-encoding actually lose much information? If, e.g., high frequencies are chopped away by the initial compression, those frequencies aren't there anymore, so this part of the compression algorithm won't touch the data during the second compression. Am I wrong?

    Read the article

  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1 breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to: find out the password of any one user? find out the password of a given user? find out the password of all users? find a way to log in as one of the users? find a way to log in as a specific user? How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct in the answers if I misunderstood something. If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones). If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower). To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords. To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting. In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
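
    As a sanity check on the arithmetic above, here is a small Python back-of-the-envelope calculation (the 10^15 ops/sec botnet, 1000 operations per hash, and 60-character alphabet are the post's own assumptions):

        # Brute-force time estimates for salted hashes, per the figures above.
        OPS_PER_SEC = 10**15       # assumed botnet throughput
        OPS_PER_HASH = 1000        # assumed cost of one SHA-1 run
        ALPHABET = 60              # upper + lower + digits, as in the post

        guesses_per_day = OPS_PER_SEC * 86400 // OPS_PER_HASH   # ~10**17

        for length in range(9, 12):
            days = ALPHABET**length / guesses_per_day
            print("length %d: %.1f days" % (length, days))
        # length 9:  ~0.1 days (about 3 hours)
        # length 10: ~7 days   (about a week)
        # length 11: ~420 days (over a year)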

    Read the article

  • How secure is the encryption used by Microsoft Office 2007?

    - by ericl42
    I've read various articles about Microsoft's Office 2007 encryption, and from what I gather, 2007 is secure using all default options because it uses AES, and 2000 and 2003 can be made secure by changing the default algorithm to AES. I was wondering if anyone else has read any other articles or knows of any specific vulnerabilities in how they implement the encryption. I would like to be able to tell users that they can use this to send semi-sensitive documents as long as they use AES and a strong password. Thanks for the information.

    Read the article

  • SASL + postfixadmin - SMTP authentication with hashed password

    - by mateo
    Hi all, I'm trying to set up a mail server. I have a problem with my SMTP authentication using SASL. I'm using postfixadmin to create my mailboxes; the password is stored in some kind of MD5. postfixadmin config.inc.php:

        $CONF['encrypt'] = 'md5crypt';
        $CONF['authlib_default_flavor'] = 'md5raw';

    SASL is configured like this (/etc/postfix/sasl/smtpd.conf):

        pwcheck_method: auxprop
        auxprop_plugin: sql
        sql_engine: mysql
        mech_list: plain login cram-md5 digest-md5
        sql_hostnames: 127.0.0.1
        sql_user: postfix
        sql_passwd: ****
        sql_database: postfix
        sql_select: SELECT password FROM mailbox WHERE username = '%u@%r'
        log_level: 7

    If I want to authenticate (let's say from Thunderbird) with my password, I can't. If I use the hashed password from MySQL, I can authenticate and send an email. So I think the problem is with the hash algorithm. Do you know how to set up SASL (or postfixadmin) so that they work together? I don't want to store my passwords in plain text...
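
    As a quick way to confirm what format postfixadmin actually stored, one can reproduce an md5crypt hash offline and compare its shape ($1$<salt>$<digest>) to the value in the mailbox.password column. A minimal sketch in Python, assuming the third-party passlib package (pip install passlib); the password here is made up:

        from passlib.hash import md5_crypt

        h = md5_crypt.hash("my-test-password")        # e.g. $1$<random salt>$<digest>
        print(h)
        print(md5_crypt.verify("my-test-password", h))  # True
        print(md5_crypt.verify("wrong-password", h))    # False

    The symptom described (the clear-text password fails, pasting the hash works) suggests SASL is comparing the client's password against the stored value verbatim, which only succeeds when the column holds plain text; challenge-response mechanisms like CRAM-MD5 and DIGEST-MD5 in particular need access to the plain password and cannot work from an md5crypt hash.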

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements: It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what the word arbitrarily big means, imagine something like 100000! (the factorial of 100000). The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system. It SHOULD utilize the full power of the platform, and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this in the same way as it does with 2^66 + 2^65 on the same platform. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better. Here is what I have found so far: Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt. The built-in integer types, or the core libraries, of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using or which kind of implementation they are using. What I already know: Using a char as a decimal digit, and a char* as a decimal string, and doing calculations on the digits using a for-loop. Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop. Booth's multiplication algorithm. What I don't know: Printing the binary array mentioned above in decimal without using naive methods. (Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string mentioned above to store the intermediate decimal results.) What I appreciate: Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion). Good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good. Any help. Please DO NOT answer this question if: you think using a double (or a long double, or a long long double) can solve this problem easily. If you do think so, it means that you don't understand the issue under discussion. you have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji
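
    Since the post mentions Python's built-in integer type as one of the reference implementations, here is a small sketch showing the behavior being asked for (arbitrary size, no pre-declared precision); the printed sizes are approximate figures from Stirling's formula:

        import math

        # Python ints grow without any declared precision, so 100000! "just works".
        n = math.factorial(100000)
        print(n.bit_length())     # ~1,516,705 bits, far beyond any native type
        print(len(str(n)))        # ~456,574 decimal digits

        # Small values are still handled cheaply by the runtime.
        a = 2**33 + 2**32
        b = 2**66 + 2**65
        print(a + b, a * b, math.gcd(a, b))

    (For the curious: CPython stores these numbers as arrays of 15- or 30-bit digits and uses schoolbook plus Karatsuba multiplication, while GMP works on native machine words with asymptotically faster algorithms, which is why GMP is the usual suggestion for C/C++.)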

    Read the article

  • Exporting/Importing events to Outlook 2007 calendar - problem

    - by iandisme
    I work on a web app that involves scheduling. A user can view his schedule, and then download a meeting request file for a particular event. In Outlook 2003, simply opening this event would cause a meeting request to pop up and the user could accept, which would either add or update the event in their calendar. However, in Outlook 2007, the meeting request Accept function is disabled, and the reason given is that the user is the organizer and can't accept his own event request. The ICS file clearly shows that this is not the case. Has anyone experienced this same problem? Does anyone know how to work around it? (Using Outlook's import function is scarcely an option because it causes duplicate events to be created; the import function doesn't seem to care that the events have the same UID.) Here is the ICS file:

        BEGIN:VCALENDAR
        PRODID:#{my app}
        VERSION:2.0
        CALSCALE:GREGORIAN
        METHOD:REQUEST
        BEGIN:VEVENT
        DTSTAMP:20100324T150236Z
        UID:eeb639a1-f8e5-4eab-ab3c-232ad91364c6
        SEQUENCE:2
        ORGANIZER:#{myApp}.#{myDomain}.com
        DESCRIPTION:
        DTSTART;TZID=Europe/London:20110620T120010
        DTEND;TZID=Europe/London:20110620T133010
        SUMMARY:BREAK:Breakfast
        LOCATION:Room 101
        END:VEVENT
        BEGIN:VTIMEZONE
        //Timezone info edited for brevity
        END:VTIMEZONE
        END:VCALENDAR

    Read the article

  • How to give ASP.NET access to a private key in a certificate in the certificate store?

    - by thames
    I have an ASP.NET application that accesses a private key in a certificate in the certificate store. On Windows Server 2003 I was able to use winhttpcertcfg.exe to give private key access to the NETWORK SERVICE account. How do I give permissions to access a private key in a certificate in the certificate store (Local Computer\Personal) on Windows Server 2008 R2 in an IIS 7.5 website? I've tried giving Full Trust access to "Everyone", "IIS AppPool\DefaultAppPool", "IIS_IUSRS", and every other security account I could find using the Certificates MMC (Server 2008 R2). However, the code below demonstrates that the application does not have access to the private key of a certificate that was imported with the private key. The code instead throws an error every time the private key property is accessed. Default.aspx:

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
        <%@ Import Namespace="System.Security.Cryptography.X509Certificates" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:Repeater ID="repeater1" runat="server">
                    <HeaderTemplate>
                        <table>
                            <tr>
                                <td>Cert</td>
                                <td>Public Key</td>
                                <td>Private Key</td>
                            </tr>
                    </HeaderTemplate>
                    <ItemTemplate>
                        <tr>
                            <td><%#((X509Certificate2)Container.DataItem).GetNameInfo(X509NameType.SimpleName, false) %></td>
                            <td><%#((X509Certificate2)Container.DataItem).HasPublicKeyAccess() %></td>
                            <td><%#((X509Certificate2)Container.DataItem).HasPrivateKeyAccess() %></td>
                        </tr>
                    </ItemTemplate>
                    <FooterTemplate>
                        </table>
                    </FooterTemplate>
                </asp:Repeater>
            </div>
            </form>
        </body>
        </html>

    Default.aspx.cs:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Web.UI;

        public partial class _Default : Page
        {
            public X509Certificate2Collection Certificates;

            protected void Page_Load(object sender, EventArgs e)
            {
                // Local Computer\Personal
                var store = new X509Store(StoreLocation.LocalMachine);

                // create and open store for read-only access
                store.Open(OpenFlags.ReadOnly);
                Certificates = store.Certificates;

                repeater1.DataSource = Certificates;
                repeater1.DataBind();
            }
        }

        public static class Extensions
        {
            public static string HasPublicKeyAccess(this X509Certificate2 cert)
            {
                try
                {
                    AsymmetricAlgorithm algorithm = cert.PublicKey.Key;
                }
                catch (Exception ex)
                {
                    return "No";
                }
                return "Yes";
            }

            public static string HasPrivateKeyAccess(this X509Certificate2 cert)
            {
                try
                {
                    string algorithm = cert.PrivateKey.KeyExchangeAlgorithm;
                }
                catch (Exception ex)
                {
                    return "No";
                }
                return "Yes";
            }
        }

    Read the article

  • Bypassing confirmation prompt of an external process

    - by Alidad
    How can I convert this Perl code to Groovy? How do I bypass the confirmation prompts of an external process? I am trying to convert a Perl script to Groovy. The program loads/deletes Maestro (job scheduling) jobs automatically. The problem is that the delete command will prompt for confirmation (Y/N) on every single job that it finds. I tried the process execute in Groovy, but it stops at the prompts. The Perl script writes a bunch of Ys to a string and prints it to the handle (if I understood it correctly) to avoid stopping. I am wondering how to do the same thing in Groovy, or any other approach to execute a command and somehow write Y at every confirmation prompt. Perl script:

        $maestrostring = "";
        while ($x < 1500) {
            $maestrostring .= "y\n";
            $x++;
        }
        # delete the jobs
        open(MAESTRO_CMD, "|ssh mserver /bin/composer delete job=pserver#APPA@");
        print MAESTRO_CMD $maestrostring;
        close(MAESTRO_CMD);

    This is my Groovy code so far:

        def deleteMaestroJobs() {
            ...
            def commandSched = "ssh $maestro_server /bin/composer delete sched=$primary_server#$app_acronym$app_level@"
            def commandJobs  = "ssh $maestro_server /bin/composer delete job=$primary_server#$app_acronym$app_level@"
            try {
                executeCommand commandJobs
            } catch (Exception ex) {
                throw new Exception("Error executing the Maestro Composer [DELETE]")
            }
            try {
                executeCommand commandSched
            } catch (Exception ex) {
                throw new Exception("Error executing the Maestro Composer [DELETE]")
            }
        }

        def executeCommand(command) {
            def process = command.execute()
            process.withWriter { writer ->
                1500.times { writer.println 'Y' }
            }
            process.consumeProcessOutput(System.out, System.err)
            process.waitFor()
        }

    Read the article

  • Logging with Quartz.net

    - by Young Ninja
    I will shamelessly state that I have little experience with Log4Net... I only just installed it, but it won't capture log events from Quartz.net, which is a scheduling library. Apparently Quartz.net uses Commons Logging, and that needs to be configured to point to my Log4Net settings. Unfortunately, it doesn't seem to work. Help is appreciated:

        <configSections>
          ...
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
          <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <section name="commonLogging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
        </configSections>

        <!-- Log4net error handling -->
        <log4net>
          <appender name="LogFileAppender" type="log4net.Appender.FileAppender">
            <param name="File" value="Admin/LabSlice.log" />
            <param name="AppendToFile" value="true" />
            <layout type="log4net.Layout.PatternLayout">
              <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n" />
            </layout>
          </appender>
          <root>
            <level value="INFO" />
            <appender-ref ref="LogFileAppender" />
          </root>
        </log4net>

        <!-- Commons logging (Quartz.net logs) -->
        <commonLogging>
          <logging>
            <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4net">
              <arg key="configType" value="INLINE" />
            </factoryAdapter>
          </logging>
        </commonLogging>

    Read the article

  • Corrupted .WAR file after transfer from 32-64 bit Windows Server to Desktop or vice versa

    - by Albert Widjaja
    Hi All, Does anyone experience this problem of a corrupted .WAR file after it has been copied over a network share? The .WAR (Web Archive) file is the J2EE application file (.WAR files are compressed with the same ZIP algorithm, I think?). Scenario 1: Windows Server 2008 x64, transferred onto Windows XP using the RDP client (Local Devices and Resources). Scenario 2: Windows XP 32-bit, transferred onto Windows Server 2003 x64 using a shared network drive (port 445, SMB?). In both scenarios the transfer always fails / the file is corrupted (the source code seems to be duplicated at the end of lines when you open it up in the Eclipse / Java IDE), but when, in both scenarios, I compress the file into a .ZIP file first, everything is OK. Can anyone explain why this problem happens? Thanks, Albert
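
    One way to pin down where the corruption happens is to compare checksums of the archive on both ends of the transfer; if the digests differ, the copy itself (rather than the .WAR format or the IDE) is mangling bytes. A minimal Python sketch:

        # Print a SHA-256 digest of a file; run it on the source and destination
        # copies of the .war and compare the two hex strings.
        import hashlib
        import sys

        def sha256sum(path, bufsize=1 << 20):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(bufsize), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        if __name__ == "__main__":
            print(sha256sum(sys.argv[1]))

    The line-ending symptom hints that something in the transfer path may be treating the binary archive as text (newline translation would explain why a .ZIP wrapper survives the copy), but without the exact transfer internals that is only a guess.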

    Read the article

  • Creating hard drive backup images efficiently

    - by Arrieta
    We are in the process of pruning our directories to recover some disk space. The 'algorithm' for the pruning/backup process consists of a list of directories and, for each one of them, a set of rules, e.g. 'compress *.bin', 'move *.blah', 'delete *.crap', 'leave *.important'; these rules change from directory to directory but are well known. The compressed and moved files are stored in a temporary file system, burned onto a Blu-ray, tested on the Blu-ray, and, finally, deleted from their original locations. I am doing this in Python (basically a walk statement with a dictionary containing the rules for each extension in each folder). Do you recommend a better methodology for pruning file systems? How do you do it? We run on Linux.
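
    For reference, a minimal sketch of the walk-plus-rules approach described above (paths, rule names, and the staging location are made-up examples; the real flow would defer deleting originals until after the Blu-ray verification step):

        import fnmatch
        import gzip
        import os
        import shutil

        STAGING = "/staging/backup"   # assumed temporary file system

        RULES = {
            "/data/project_a": [
                ("*.important", "leave"),
                ("*.bin", "compress"),
                ("*.blah", "move"),
                ("*.crap", "delete"),
            ],
        }

        def apply_rules(root, rules):
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    for pattern, action in rules:
                        if not fnmatch.fnmatch(name, pattern):
                            continue
                        if action == "compress":
                            # Write the gzipped copy into staging; the original
                            # is removed later, after burn verification.
                            with open(path, "rb") as src, \
                                 gzip.open(path + ".gz", "wb") as dst:
                                shutil.copyfileobj(src, dst)
                            shutil.move(path + ".gz", STAGING)
                        elif action == "move":
                            shutil.move(path, STAGING)
                        elif action == "delete":
                            os.remove(path)
                        break  # first matching rule wins; "leave" does nothing

        for root, rules in RULES.items():
            apply_rules(root, rules)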

    Read the article

  • Incremental PCA

    - by smichak
    Hi, Lately I've been looking into an implementation of an incremental PCA algorithm in Python - I couldn't find something that would meet my needs, so I did some reading and implemented an algorithm I found in some paper. Here is the module's code - the relevant paper on which it is based is mentioned in the module's documentation. I would appreciate any feedback from people who are interested in this. Micha

        #!/usr/bin/env python
        """
        Incremental PCA calculation module.

        Based on P.Hall, D. Marshall and R. Martin "Incremental Eigenalysis for
        Classification" which appeared in British Machine Vision Conference,
        volume 1, pages 286-295, September 1998.

        Principal components are updated sequentially as new observations are
        introduced. Each new observation (x) is projected on the eigenspace
        spanned by the current principal components (U) and the residual vector
        (r = x - U(U.T*x)) is used as a new principal component (U' = [U r]).
        The new principal components are then rotated by a rotation matrix (R)
        whose columns are the eigenvectors of the transformed covariance matrix
        (D=U'.T*C*U) to yield p + 1 principal components. From those, only the
        first p are selected.
        """

        __author__ = "Micha Kalfon"

        import numpy as np

        _ZERO_THRESHOLD = 1e-9  # Everything below this is zero

        class IPCA(object):
            """Incremental PCA calculation object.

            General Parameters:
                m - Number of variables per observation
                n - Number of observations
                p - Dimension to which the data should be reduced
            """

            def __init__(self, m, p):
                """Creates an incremental PCA object for m-dimensional observations
                in order to reduce them to a p-dimensional subspace.

                @param m: Number of variables per observation.
                @param p: Number of principle components.
                @return: An IPCA object.
                """
                self._m = float(m)
                self._n = 0.0
                self._p = float(p)
                self._mean = np.matrix(np.zeros((m, 1), dtype=np.float64))
                self._covariance = np.matrix(np.zeros((m, m), dtype=np.float64))
                self._eigenvectors = np.matrix(np.zeros((m, p), dtype=np.float64))
                self._eigenvalues = np.matrix(np.zeros((1, p), dtype=np.float64))

            def update(self, x):
                """Updates with a new observation vector x.

                @param x: Next observation as a column vector (m x 1).
                """
                m = self._m
                n = self._n
                p = self._p
                mean = self._mean
                C = self._covariance
                U = self._eigenvectors
                E = self._eigenvalues

                if type(x) is not np.matrix or x.shape != (m, 1):
                    raise TypeError('Input is not a matrix (%d, 1)' % int(m))

                # Update covariance matrix and mean vector and centralize input
                # around new mean
                oldmean = mean
                mean = (n*mean + x) / (n + 1.0)
                C = (n*C + x*x.T + n*oldmean*oldmean.T - (n+1)*mean*mean.T) / (n + 1.0)
                x -= mean

                # Project new input on current p-dimensional subspace and calculate
                # the normalized residual vector
                g = U.T*x
                r = x - (U*g)
                r = (r / np.linalg.norm(r)) if not _is_zero(r) else np.zeros_like(r)

                # Extend the transformation matrix with the residual vector and find
                # the rotation matrix by solving the eigenproblem DR=RE
                U = np.concatenate((U, r), 1)
                D = U.T*C*U
                (E, R) = np.linalg.eigh(D)

                # Sort eigenvalues and eigenvectors from largest to smallest to get
                # the rotation matrix R
                sorter = list(reversed(E.argsort(0)))
                E = E[sorter]
                R = R[:,sorter]

                # Apply the rotation matrix
                U = U*R

                # Select only p largest eigenvectors and values and update state
                self._n += 1.0
                self._mean = mean
                self._covariance = C
                self._eigenvectors = U[:, 0:p]
                self._eigenvalues = E[0:p]

            @property
            def components(self):
                """Returns a matrix with the current principal components as columns.
                """
                return self._eigenvectors

            @property
            def variances(self):
                """Returns a list with the appropriate variance along each principal
                component.
                """
                return self._eigenvalues

        def _is_zero(x):
            """Return a boolean indicating whether the given vector is a zero vector
            up to a threshold.
            """
            return np.fabs(x).min() < _ZERO_THRESHOLD

        if __name__ == '__main__':
            import sys

            def pca_svd(X):
                X = X - X.mean(0).repeat(X.shape[0], 0)
                [_, _, V] = np.linalg.svd(X)
                return V

            N = 1000
            obs = np.matrix([np.random.normal(size=10) for _ in xrange(N)])

            V = pca_svd(obs)
            print V[0:2]

            pca = IPCA(obs.shape[1], 2)
            for i in xrange(obs.shape[0]):
                x = obs[i,:].transpose()
                pca.update(x)

            U = pca.components
            print U

    Read the article

  • Intent.putExtras not consistent

    - by martinjd
    I have a weird situation with AlarmManager. I am scheduling an event with AlarmManager and passing in a string using intent.putExtra. The string is either silent or vibrate, and when the receiver fires, the phone should either turn off the ringer or set the phone to vibrate. The log statement correctly outputs the expected value each time.

        Intent intent;
        if (eventType.equals("start")) {
            intent = new Intent(context, SReceiver.class);
        } else {
            intent = new Intent(context, EReceiver.class);
        }
        intent.setAction(eventType + Long.toString(newId));
        Log.v("EditQT", ringerModeType.toUpperCase());
        intent.putExtra("ringerModeType", ringerModeType.toUpperCase());
        PendingIntent appIntent = PendingIntent.getBroadcast(context, 0, intent, 0);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), appIntent);

    The receiver that fires when the alarm executes also has a log statement, and I can see the first time around that the statement outputs the expected string, either SILENT or VIBRATE. The alarm executes, and then I change the value for putExtra to the opposite string, and the receiver still displays the previous value even though the call from the code above shows that the new value was passed in. The value for setAction is the same each time.

        audioManager = (AudioManager) context.getSystemService(Activity.AUDIO_SERVICE);
        Log.v("Start", intent.getExtras().get("ringerModeType").toString());
        if (intent.getExtras().get("ringerModeType").equals("SILENTMODE")) {
            audioManager.setRingerMode(AudioManager.RINGER_MODE_SILENT);
        } else {
            audioManager.setRingerMode(AudioManager.RINGER_MODE_VIBRATE);
        }

    Any thoughts?

    Read the article

  • Technically, why are processes in Erlang more efficient than OS threads?

    - by Jonas
    Spawning processes seems to be much more efficient in Erlang than starting new threads via the OS (i.e. by using Java or C). Erlang spawns thousands and millions of processes in a short period of time. I know that they are different, since Erlang does per-process GC, and processes in Erlang are implemented the same way on all platforms, which isn't the case with OS threads. But I don't fully understand, technically, why Erlang processes are so much more efficient and have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling, context switches, keep track of the values in the registers, and so on... Simply put, why aren't OS threads implemented in the same way as processes in Erlang? Do they have to support something more? And why do they need a bigger memory footprint? Technically, why are processes in Erlang more efficient than OS threads? And why can't threads in the OS be implemented and managed in the same efficient way?

    Read the article

  • Use of clone() and appendTo() with draggable - unexpected results with dragging

    - by Matt Gutting
    I'm constructing a UI for a doctor scheduling app. We have several doctors, each of whom can go in one of several locations on a scheduled day (M - F). I have the main day/location grid (table) in the center of the screen. At the left is a table for the doctor names. On loading, each table cell contains a span (outlined) with the doctor name. The doctor name can go in one slot for each day. I didn't want to just put 5 copies of the doctor name, because the doctor might not be available all 5 days of the week. The idea was: Drag the span and drop it into the calendar table. On the drag "start" event, clone the span and append it to the table cell. Now there is another span ready to be dropped into the calendar table. One line of code does the work:

        $(ui.helper).clone(true).prependTo(ui.helper.parent());

    This works. But when I move the cloned span, the original one moves in sync - preserving the spatial relationships as I move the clone around (no doubt there's a "position:relative;left=XX;top=YY" inserted somewhere). I'm sure there's a way to do what I'm thinking of while keeping the two spans independent, but I'm not thinking of one. Does anyone have an idea? Thanks! Matt. I posted this identical question to the jQuery forum as well.

    Read the article

  • Ruby ICalendar Gem: How to get e-mail reminders working.

    - by Jenny
    I'm trying to work out how to use the icalendar ruby gem, found at: http://icalendar.rubyforge.org/ According to their tutorial, you do something like:

        cal.event.do
          # ...other event properties
          alarm do
            action "EMAIL"
            description "This is an event reminder" # email body (required)
            summary "Alarm notification" # email subject (required)
            attendees %w(mailto:[email protected] mailto:[email protected]) # one or more email recipients (required)
            add_attendee "mailto:[email protected]"
            remove_attendee "mailto:[email protected]"
            trigger "-PT15M" # 15 minutes before
            add_attach "ftp://host.com/novo-procs/felizano.exe", {"FMTTYPE" => "application/binary"} # email attachments (optional)
          end

          alarm do
            action "DISPLAY" # This line isn't necessary, it's the default
            summary "Alarm notification"
            trigger "-P1DT0H0M0S" # 1 day before
          end

          alarm do
            action "AUDIO"
            trigger "-PT15M"
            add_attach "Basso", {"VALUE" => ["URI"]} # only one attach allowed (optional)
          end
        end

    So, I am doing something similar in my code:

        def schedule_event
          puts "Scheduling an event for " + self.title + " at " + self.start_time
          start = self.start_time
          endt = self.start_time
          title = self.title
          desc = self.description
          chan = self.channel.name

          # Create a calendar with an event (standard method)
          cal = Calendar.new
          cal.event do
            dtstart Program.convertToDate(start)
            dtend Program.convertToDate(endt)
            summary "Want to watch " + title + " on: " + chan + " at: " + start
            description desc
            klass "PRIVATE"

            alarm do
              action "EMAIL"
              description desc # email body (required)
              summary "Want to watch " + title + " on: " + chan + " at: " + start # email subject (required)
              attendees %w(mailto:[email protected]) # one or more email recipients (required)
              trigger "-PT25M" # 25 minutes before
            end
          end
        end

    However, I never see any e-mail sent to my account... I have even tried hard-coding the start times to be Time.now and sending them out 0 minutes before, but no luck... Am I doing something glaringly wrong?

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking at different setups for a server cluster supporting SSL, and I would like to benchmark my idea with you. Requirements: All servers in the cluster should be under the same full domain name (http and https). Routing to subsystems is done by URI matching in HAProxy. All URIs must support SSL. Wish: centralized routing rules.

            ---<----http-----<--
           |                    |
        Inet -->HA--+---https--->NGInx_SSL_1..N
                    |
                    +---http---> Apache_1..M
                    |
                    +---http---> NodeJS

    Idea: Configure HA to route all SSL traffic (mode=tcp, algorithm=Source) to an NGInx cluster that turns https traffic into http. Re-pass the http traffic from NGInx to the HA for normal load balancing, which HA performs based on its config. My question is simply: Is this the best way to configure this, based on the requirements above?

    Read the article
