Search Results

Search found 1709 results on 69 pages for 'environments'.

Page 57 of 69

  • What is the strangest programming language you have used?

    - by Anders Sandvig
    For me I think it has to be the scripting language of an old proprietary telephony platform I used in the early 2000s. The language itself was not so bad, but the fact that it was meant to be edited with a drag-and-drop GUI, which did not expose all the functionality I needed, was quite frustrating. I also remember having to manually implement many common functions, such as calculating the length of a string. Whenever I wanted to use "custom" or "advanced" functions, I had to edit the script files in a text editor, but as soon as I opened the files in the GUI again they were reformatted and restructured, which usually resulted in broken code. And, of course, this was an interpreted language, so I would not know it was broken until I actually ran it—oh, and did I mention that it did not run the same in the simulator as in the live environment? So, what is the strangest programming language or environment you have used, and why did you use it? Note that I'm interested in languages and environments that you have actually used for "real-world" situations, so Whitespace, Brainf***k and friends are not valid—unless you have used them for something "real", of course.

    Read the article

  • How can I set paperclip's storage mechanism based on the current Rails environment?

    - by John Reilly
    I have a rails application that has multiple models with paperclip attachments that are all uploaded to S3. This app also has a large test suite that is run quite often. The downside with this is that a ton of files are uploaded to our S3 account on every test run, making the test suite run slowly. It also slows down development a bit, and requires you to have an internet connection in order to work on the code. Is there a reasonable way to set the paperclip storage mechanism based on the Rails environment? Ideally, our test and development environments would use the local filesystem storage, and the production environment would use S3 storage. I'd also like to extract this logic into a shared module of some kind, since we have several models that will need this behavior. I'd like to avoid a solution like this inside of every model: ### We don't want to do this in our models... if Rails.env.production? has_attached_file :image, :styles => {...}, :storage => :s3, # ...etc... else has_attached_file :image, :styles => {...}, :storage => :filesystem, # ...etc... end Any advice or suggestions would be greatly appreciated! :-)

    Read the article

  • How to track a project's extraneous quirks

    - by Steerpike
    Hello, It's possible that the answer to this question is just standard bug tracking software like Jira or FogBugz, but I'm hoping someone out there knows a better system for what I'm describing. My current project requires a lot of setup quirkiness before I can actually start coding. For example: a series of convoluted internal company commands before I can initiate an SSH connection; making sure any third-party classes that make external calls have the internal company proxy options set up, while also making sure these settings won't be set up when installed in a production environment; making sure the proxy is set before trying to install PEAR packages; and other similar things, mostly involving internal IT security and getting it to work with modules and packages. Individually none of these things is a huge deal, and I've written extensive notes to myself recording the exact commands and additions I've made, but they're currently in a general text document and it's going to be hard to remember exactly where what I need is far down the line. We also have several new staff starting soon, and I'd rather give them an easier time of setting up their programming environments. Like I said, they aren't 'programming quirks' exactly, just the constant fiddling that comes about before programming starts in earnest. Any thoughts on the best way to document these things, for my own and future generations' sanity?

    Read the article

  • C++: Help with cin difference between Linux and Windows

    - by Krashman5k
    I have a Win32 console program that I wrote and it works fine. The program takes input from the user, performs some calculations, and displays the output - standard stuff. For fun, I am trying to get the program to work on my Fedora box, but I am running into an issue with clearing cin when the user inputs something that does not match my variable type. Here is the code in question: void CParameter::setPrincipal() { double principal = 0.0; cout << endl << "Please enter the loan principal: "; cin >> principal; while(principal <= 0) { if (cin.fail()) { cin.clear(); cin.ignore(INT_MAX, '\n'); } else { cout << endl << "Please enter a number greater than zero. Please try again." << endl; cin >> principal; } } m_Parameter = principal; } This code works in Windows. For example, if the user tries to enter a char data type (versus double) then the program informs the user of the error, resets cin, and allows the user another opportunity to enter a valid value. When I move this code to Fedora, it compiles fine. When I run the program and enter an invalid data type, the while loop never breaks to allow the user to change the input. My questions are: how do I clear cin when invalid data is entered in the Fedora environment? Also, how should I write this code so it will work in both environments (Windows & Linux)? Thanks in advance for your help!

    Read the article

  • Are there concurrency problems when using -performSelector:withObject:afterDelay: ?

    - by mystify
    For example, I often use this: [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:someDelay]; Now, let's say I call this 10 times to perform at the exact same delay, like: [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; [self performSelector:@selector(doSomethingAfterDelay:) withObject:someObject afterDelay:2.0]; - (void)doSomethingAfterDelay:(id)someObject { /* access an array, read stuff, write stuff, do different things that would suffer in multithreaded environments .... all operations are nonatomic! */ } I have observed pretty strange behavior when doing things like this. As I understand it, this method schedules a timer to fire on the current thread, in this case the main thread. But since it doesn't create new threads, it actually should not be possible to run into concurrency problems, right?

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working with a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into a repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that, when a developer is testing his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date and is also built locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask - does this also mean that I should NOT check in the mdf or ldf files that are created by Visual Studio? Thanks for any help and additional insight. Always appreciated.

    Read the article

  • PHP dynamic Page-level DocBlocks

    - by Obmerk Kronen
    I was wondering if there is a way to interact with page-level DocBlocks. My question is specifically about WordPress plugin development, but the same question has also arisen in non-WordPress environments. The reason is mainly the possibility of easily changing versions and names throughout a large project, perhaps with a constant definition, in a way that is also reflected in the DocBlock. The following example DocBlock is from a WordPress plugin I wrote: /* Plugin Name: o99 Auxilary Functions v0.4.7 Plugin URI: http://www.myurl.com Description: some simple description that nobody reads. Version: 0.4.7 Author: my cool name Author URI: http://www.ok-alsouri.com */ Is there a way to transform it into: $ver = '0.4.7'; $uri = 'http://www.myurl.com'; $desc = 'some simple description that nobody reads.'; $mcn = 'my cool name'; etc., and then /* Plugin Name: o99 Auxilary Functions ($ver) Plugin URI: ($uri) Description: ($desc) Version: ($ver) Author: ($mcn) Author URI: ($uri) */ Obviously, for echo to work I would need to break the DocBlock itself, and I cannot write the DocBlock directly into its own file. In short: can I "generate" a DocBlock with PHP itself somehow? (I would think that the answer is "no" for the page itself, but maybe I am wrong and someone has a neat hack.) Is that even possible?
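    A generic workaround (not PHP-specific) is to treat the plugin header as generated output: keep it in a small template and write the real plugin file as a build step, so the version and author live in exactly one place. Below is a minimal Python sketch of that idea; the file names and the template mechanism are assumptions for illustration, not part of the original plugin.

    ```python
    #!/usr/bin/env python
    """Hypothetical build step: regenerate the plugin header from one dict.

    Assumes a template 'plugin-header.tpl' containing $ver, $uri, $desc and
    $name placeholders, plus the plugin body kept in 'plugin-body.php'.
    """
    from string import Template

    META = {
        "ver":  "0.4.7",
        "uri":  "http://www.myurl.com",
        "desc": "some simple description that nobody reads.",
        "name": "my cool name",
    }

    with open("plugin-header.tpl") as tpl, open("plugin-body.php") as body:
        header = Template(tpl.read()).substitute(META)
        # Concatenate the generated DocBlock header with the unchanged body.
        with open("o99-auxilary-functions.php", "w") as out:  # made-up file name
            out.write(header)
            out.write(body.read())
    ```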

    Read the article

  • PHP Mail() not working on remote server

    - by Amaerth
    I am developing an application and have been testing the mail() function in PHP. The following works just fine on my local machine to send emails to myself, but as soon as I try to send it from the testing environment to my local machine, it silently fails. I will still get the "Mail Sent" message, but no message is sent. I turned on mail logging in the php.ini file, but even the log doesn't seem to be populated after I refresh the page. Again, the .php files and php.ini files are identical in both environments. Port 25 has been opened on the testing environment, and we are using a Microsoft Exchange server. <?php $to = "[email protected]"; $subject = "Test mail"; $message = "Hello! This is a simple email message."; $from = "[email protected]"; $headers = "From:" . $from; mail($to,$subject,$message,$headers); echo "Mail Sent."; ?> SMTP area of the php.ini file: [mail function] ; For Win32 only. ; http://php.net/smtp SMTP = exhange.server.org ; http://php.net/smtp-port smtp_port = 25 ; For Win32 only. ; http://php.net/sendmail-from sendmail_from = [email protected]
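    Because mail() reports success as soon as PHP hands the message to the configured SMTP host, a useful next step is to test the SMTP connection from the testing server independently of PHP. The sketch below does that with Python's standard smtplib; the addresses are placeholders, and one plausible cause of a silent drop (an assumption, not something confirmed above) is that the Exchange server refuses to relay for the sender or recipient in question.

    ```python
    #!/usr/bin/env python
    """Quick SMTP connectivity check, independent of PHP (placeholder values)."""
    import smtplib

    HOST = "exhange.server.org"   # the SMTP host from the php.ini above
    PORT = 25

    server = smtplib.SMTP(HOST, PORT, timeout=10)
    server.set_debuglevel(1)      # prints the full SMTP conversation
    try:
        # Replace with real addresses; a 5xx reply here usually means the
        # server is rejecting the sender/recipient or refusing to relay.
        server.sendmail("sender@example.com",
                        ["recipient@example.com"],
                        "Subject: SMTP test\r\n\r\nTest body.\r\n")
    finally:
        server.quit()
    ```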

    Read the article

  • Java vs. C variant for desktop and tablet development

    - by MirroredFate
    I am going to write a desktop application, but I am conflicted concerning which language to use. It (the desktop application) will need to have a good GUI, and to be extensible (ideally through modules of some sort). It must be completely cross-platform, including being executable in various tablet environments. I put this as a requirement while realizing that some modification will no doubt be necessary. The language should also have some form of networking tools available. I have read http://introcs.cs.princeton.edu/java/faq/c2java.html and understand the differences between Java and C very well. I am looking not necessarily at C, but more at a C variant. If it is a complete toss-up, I will use Java as I know Java much better. However, I do not want to use a language that will be inferior for the task I wish to accomplish. Thank you for all suggestions and explanations. NOTE: If this is not the correct stack for this question, I apologize. It seemed appropriate according to the rules.

    Read the article

  • Multiple webapps in Tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out for myself what is the most sensible way to do this. My setup has evolved through 2 scenarios and I'm considering a third for maximal separation of the distinct webapps. [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps -tomcat |_ webapps |_ webapp1 |_ webapp2 |_ webapp[n] |_ WEB-INF (with Cocoon libs) This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, this poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario. [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 | |_ WEB-INF (with Cocoon libs) |_ webapp2 | |_ WEB-INF (with Cocoon libs) |_ webapp[n] |_ WEB-INF (with Cocoon libs) This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, this soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario. [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment -tomcat |_ webapps |_ webapp1 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp2 |_ WEB-INF (with Cocoon libs) -tomcat |_ webapps |_ webapp[n] |_ WEB-INF (with Cocoon libs) I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp. I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added. And there seems to be no way to reload worker.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron

    Read the article

  • Trac problem: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    - by Janosch
    This problem has already been described multiple times on different mailing lists, but no solution has yet been published. My original setup is as follows (but in the meantime I have a simpler one on Windows 7): Ubuntu server with Apache 2.2 and Python 2.7; virtual Python environment created with virtualenv; installed babel, genshi and trac, in this order, using pip in the virtual environment. Trac seems to run fine with tracd, but when visiting it through Apache, I get the following error on an official Trac error page: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule" The stacktrace looks like this: Traceback (most recent call last): File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py",line 473, in _dispatch_request dispatcher.dispatch(req) File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 154, in dispatch chosen_handler = self.default_handler File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/config.py", line 691, in __get__ self.section, self.name)) AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule". Please update the option trac.default_handler in trac.ini. I have already tried a lot to get to the root of the problem; to me it looks as if all native Trac components refuse to load. When one explicitly imports these components in the WSGI handler, some of them start to work somehow. Since I suspected the virtual environment, I dropped it, manually copied all dependencies (babel, genshi, trac, ..) to one directory, and added this directory to sys.path in the WSGI handler. I get exactly the same error. Since this setup is now independent of the environment, one can easily try it out on any other machine (Windows or Linux) running Apache 2 and Python 2.7. On my Windows 7 machine, I got exactly the same problem. I zipped together the whole bundle; one can download it from http://www.xterity.de/tmp/trac-installation.zip . In the Apache configuration (Windows 7 machine) I use the following settings: Alias /trac/chrome/common "D:/workspace/trac-installation/trac-resources/common" Alias /trac/chrome/site "D:/workspace/trac-installation/trac-resources/site" WSGIScriptAlias /trac "D:/workspace/trac-installation/apache/handler.wsgi" <Directory "D:/workspace/trac-installation/trac-resources"> Order allow,deny Allow from all </Directory> <Directory "D:/workspace/trac-installation"> Order allow,deny Allow from all </Directory> <Location "/trac"> Order allow,deny Allow from all </Location> And my handler.wsgi looks like this: import os import sys sys.path.append('D:/workspace/trac-installation/dependencies/') os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004' os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs' import trac.web.main application = trac.web.main.dispatch_request Has anybody got an idea what the problem could be, or how to find out where it comes from?
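    One way to narrow this down is to temporarily swap handler.wsgi for a small diagnostic WSGI app that reports which interpreter and sys.path mod_wsgi is actually using, and whether the module that provides WikiModule can be imported at all. This is only a sketch of that idea; it assumes WikiModule lives in trac.wiki.web_ui, and it is not part of the original setup.

    ```python
    # diagnostic.wsgi - hypothetical temporary stand-in for handler.wsgi
    # (written in the Python 2 style matching the environment above).
    import sys
    import traceback

    def application(environ, start_response):
        lines = ["python executable: %s" % sys.executable,
                 "python version:    %s" % sys.version.replace("\n", " "),
                 "sys.path:"]
        lines.extend("  %s" % p for p in sys.path)
        try:
            # Assumption: WikiModule is normally provided by this module.
            import trac.wiki.web_ui
            lines.append("trac.wiki.web_ui imported from %s"
                         % trac.wiki.web_ui.__file__)
        except Exception:
            lines.append("import failed:")
            lines.append(traceback.format_exc())
        body = "\n".join(lines) + "\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    ```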

    Read the article

  • PHP: gethostbyname() suddenly no longer resolves names to IPs when run in Apache

    - by hurikhan77
    One of our older legacy servers, which gets no further updates or reconfigurations, suddenly stopped resolving hostnames to IPs when PHP is executed within Apache. However, it still works fine when executed from the CLI. From the RSS cache's last modification time, I deduce that it stopped working around March 28th. To reproduce the problem, I created a script using fsockopen() and it said "connection failed (errno 2)". I further reduced the problem to a failed name resolution: <?php $addr = gethostbyname("twitter.com"); echo "ADDR($addr)"; ?> When I run this through Apache, the output is ADDR(twitter.com), which is wrong. When I run this from the CLI, the output is ADDR(aaa.bbb.ccc.ddd) with varying IP addresses, as expected. Nothing in the server setup has changed. CLI and Apache module share the same php.ini. PHP is version v4.4.9 with Zend Optimizer v2.5.10. Apache is v1.3.31. I know the versions are old. But since nothing has been changed, a solution like "try to upgrade versions first" is no solution, as the server's feature set/versioning is frozen and will be replaced soon. Still, we need a solution. If I run dig through the script, it works in both environments (mod_php and CLI), but this is nothing more than an ugly hack, as it would involve many edits and much testing throughout the whole script base, which is also undesirable as the PHP application on the server is frozen, too, and only receives security updates. It will be replaced by a complete rewrite (on the new server). But as the rewrite will take some time and successively replace parts of the legacy application, we need a fix for the resolver problem. I have already googled a bit, and while the problem is known, many did not find a fix. The fix of raising memory limits did not work. Restarts did not work. The resolver in mod_php just stopped working for no apparent reason. :-(

    Read the article

  • Performance of ClearCase servers on VMs?

    - by Garen
    Where I work, we are in need of upgrading our ClearCase servers and it's been proposed that we move them into a new (yet-to-be-deployed) VMware system. In the past I've not noticed significant performance problems with most applications when running in VMs, but given that ClearCase "speed" (i.e. dynamic-view response times) is so latency-sensitive, I am concerned that this will not be a good idea. VMware has numerous white papers detailing performance-related issues based on network traffic patterns, which reinforces my hypothesis, but nothing particularly concrete for this particular use case that I can see. What I can find are various forum posts online, which are somewhat dated, e.g.: ClearCase clients are supported on VMWare, but not for performance issues. I would never put a production server on VM. It will work but will be slower. The more complex the slower it gets. accessing or building from a local snapshot view will be the fastest, building in a remote VM stored dynamic view using clearmake will be painful..... VMWare is best used for test environments (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) and: VMware + ClearCase = works but SLUGGISH!!!!!! (windows)(not for production environment) My company tried to mandate that all new apps or app upgrades needed to be on/moved VMware instances. The VMware instance could not handle the demands of ClearCase. (come to find out that I was sharing a box with a database server) Will you know what else would be on that box besides ClearCase? Karl (via http://www.cmcrossroads.com/forums?func=view&id=44094&catid=31) and: ... are still finding we can't get the performance using dynamic views to below 2.5 times that of a physical machine. Interestingly, speaking to a few people with much VMWare experience and indeed from running builds, we are finding that typically, VMWare doesn't take that much longer for most applications and about 10-20% longer has been quoted. (via http://www.cmcrossroads.com/forums?func=view&catid=31&id=44094&limit=10&start=10) Which brings me to the more direct question: Does anyone have any more recent experience with ClearCase servers on VMware (if not any specific, relevant performance advice)?

    Read the article

  • Python install issue on Mac OS X

    - by Michael Waterfall
    I have been using the standard python that comes with OS X Lion (2.7.2) but I wanted to build a UCS-4 version to handle 4-byte unicode characters better. I had already installed pip and packages like pytz, virtualenv and virtualenvwrapper, etc., and these are installed in /Library/Python/2.7/site-packages. My $PATH is /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin. To build a new version of python on the machine (outside of any project specific virtual environments, that will come later), I followed the instructions on this article and managed to build it in /usr/local/bin. The problem is that when I launched a new bash window, I got the following virtualenvwrapper error: Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: No module named virtualenvwrapper.hook_loader virtualenvwrapper.sh: There was a problem running the initialization hooks. If Python could not import the module virtualenvwrapper.hook_loader, check that virtualenv has been installed for VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python and that PATH is set properly. The instructions said to move /usr/local/bin to the top of the /etc/paths file, and since then I've noticed some strange issues. I installed pip into /usr/local/bin and now I have assumed that since I'm working in /usr/local/bin, and the newly installed python's site packages is now located in /usr/local/lib/python2.7/site-packages, when I do pip freeze, it should be empty as nothing is installed there yet. However, pip freeze still reports things installed in the old (OS X) site-packages folder. Here's some info after the build: $ which python /usr/local/bin/python $ which pip /usr/local/bin/pip $ echo $PATH /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin When I uninstall a python package with pip, it removes it from the old site-packages folder as expected. When I install it again, instead of installing it in /usr/local/lib/python2.7/site-packages, it installs it in /Library/Python/2.7/site-packages (verified by attempting to install it again and receiving Requirement already satisfied (use --upgrade to upgrade): pytz in /Library/Python/2.7/site-packages ). How is it getting that path for the old site-packages folder? Why won't it install it in the correct location for the python install it's using? I'm getting several other issues since promoting /usr/local/bin but I think if I understand this I'll be able to get somewhere. Can anyone see what's happening? If you need any more info I'll be happy to provide it.
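    A quick way to see which interpreter and site-packages directory each command is really using is to run a small diagnostic with each interpreter (for example /usr/bin/python versus /usr/local/bin/python). This is just a sketch; it only prints paths and changes nothing:

    ```python
    # where_am_i.py - run as:  /usr/bin/python where_am_i.py
    #            and then as:  /usr/local/bin/python where_am_i.py
    # to compare which prefix and site-packages directory each one uses.
    import sys
    from distutils.sysconfig import get_python_lib

    print("executable:    %s" % sys.executable)
    print("prefix:        %s" % sys.prefix)
    print("site-packages: %s" % get_python_lib())
    print("sys.path:")
    for entry in sys.path:
        print("  %s" % entry)
    ```

    If /usr/local/bin/pip still reports the old /Library/Python/2.7/site-packages here, one likely (though unconfirmed) explanation is that its shebang line still points at the system interpreter; `head -1 /usr/local/bin/pip` would show which Python it actually launches.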

    Read the article

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
    I’m a Linux/Unix systems admin who also manages a Macintosh server infrastructure, and there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP & Windows 7 users to connect to with little or no hassle. The problem? Windows users can’t seem to even get to the point of a password prompt, let alone connect. Mind you, this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connecting. The gist of this post is: the tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here? I’ve read pieces like this one here that suggest turning off file sharing & then adding a share with AFP/SMB enabled would work. But the suggestion seems to apply to 10.8, and from what I know a lot has changed in Samba support in 10.9, let alone the iterations up to 10.9.4. Then I found this great tutorial here that explains things step-by-step, which seems like it should work, but the problem is the example given applies to a local user created on the Mac, whereas I would like users in an Active Directory group (which the Mac is bound to) to access the Mac Mini shares. There are also tons of great tips here on MacWindows.com, but nothing seems solid for the issue I am facing. So from what I am reading these are my options: Local User Versus Active Directory: Set up a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing since Active Directory won’t work. Is this really the case? Because loss of AD integration is a major pain. Do Extended File Attributes Get Retained from Windows Users: If this were to work, how do extended attributes come into play? Loss of metadata & related info is not an option. How Fragile Is Any of This to Updates: How does any of this shake out with Mac OS X updates as well as Windows updates? Installing Official, Open Source Samba: Would upgrading the Samba install on the server to the official open source Samba via a package like SMBUp or via the Homebrew method described here help or make the issue worse? I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project. Our Goals Super easy deploys (we want to push a button to update production) Automated deploys to test environments (using Jenkins) Ease of maintenance (we have an app to write, don't want to spend our time fiddling with production issues) Ability to handle a service-oriented architecture (many small apps, various languages and data stores) Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale) We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching Ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a JAR and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt this was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt like Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe that we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple. What I'm wondering is, what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around. Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike? If you don't use Elastic Beanstalk but considered it, what do you use and why didn't you use Elastic Beanstalk? What are the advantages and disadvantages of an Elastic Beanstalk-based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • AMD GPU but display on Intel integrated graphics

    - by pitseeker
    On my Ubuntu 12.04 I connected my monitor to the onboard intel graphics. I'd like to use my ati radeon 6770 for opencl tasks (e.g. bitcoin mining). So far I couldn't figure out how to get the ati driver working. When calling "aticonfig --initial -f" it always writes a new xorg.conf that ignores the intel graphics. At boot time it works only when I attached the monitor to the ati card. So I manually tampered with the xorg.conf and got this: Section "ServerLayout" Identifier "Default Monitor" Screen 0 "myscreen" 0 0 Screen 1 "deadscreen" RightOf "myscreen" EndSection Section "Module" EndSection Section "Monitor" Identifier "Default Monitor" Option "VendorName" "Monitor Vendor" Option "ModelName" "Monitor Name" Option "DPMS" "true" EndSection Section "Monitor" Identifier "null Monitor" Option "Enable" "false" EndSection Section "Device" Identifier "Intel Integrated Graphics" Driver "intel" BusID "PCI:0:2:0" Screen 0 EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" Screen 1 EndSection Section "Screen" Identifier "myscreen" Device "Intel Integrated Graphics" Monitor "Default Monitor" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection Section "Screen" Identifier "deadscreen" Device "aticonfig-Device[0]-0" Monitor "null Monitor" DefaultDepth 24 SubSection "Display" Viewport 0 0 Depth 24 EndSubSection EndSection I think this might be the right way since I see that X tries to start both drivers in /var/log/Xorg.0.log. However the fglrx driver seems crash (end of xorg.0.log): Backtrace: [ 6.625] 0: /usr/bin/X (xorg_backtrace+0x26) [0x7fb5cd41b846] [ 6.625] 1: /usr/bin/X (0x7fb5cd293000+0x18c6ea) [0x7fb5cd41f6ea] [ 6.625] 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fb5cc5b9000+0xfcb0) [0x7fb5cc5c8cb0] [ 6.625] 3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so (xdl_xs111_atiddxGetGPUMapInfo+0x1b1) [0x7fb5c88e16b1] [ 6.625] 4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/drivers/fglrx_drv.so (atiddxGetGPUMapInfo+0xd) [0x7fb5c87bcc0d] [ 6.625] 5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1ab29) [0x7fb5ca147b29] [ 6.625] 6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1cf8c) [0x7fb5ca149f8c] [ 6.625] 7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/extra-modules.dpkg-tmp/modules/extensions/libglx.so (0x7fb5ca12d000+0x1ee55) [0x7fb5ca14be55] [ 6.626] 8: /usr/bin/X (InitExtensions+0x99) [0x7fb5cd350069] [ 6.626] 9: /usr/bin/X (0x7fb5cd293000+0x3d605) [0x7fb5cd2d0605] [ 6.626] 10: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xed) [0x7fb5cb44e76d] [ 6.626] 11: /usr/bin/X (0x7fb5cd293000+0x3daad) [0x7fb5cd2d0aad] [ 6.626] Segmentation fault at address 0x14 [ 6.626] Caught signal 11 (Segmentation fault). Server aborting [ 6.626] I'd be very happy if someone can give me a hint on how to configure my ATI card while using the integrated graphics for display. Update I used most of jjhughes57 config and successfully booted the X server on intel (keyboard layout is changed though, funnily). Unfortunately the 2nd X server (fglrx) doesn't fully start. 
It shuts itself down right after starting [ 6.265] (II) fglrx(0): Restoring Recent Mode via PCS is not supported in RANDR 1.2 capable environments [ 6.296] (II) UnloadModule: "mouse" [ 6.296] (II) Unloading mouse [ 6.296] (II) UnloadModule: "kbd" [ 6.296] (II) Unloading kbd [ 6.298] (II) fglrx(0): Shutdown CMMQS [ 6.298] (II) fglrx(0): [uki] removed 1 reserved context for kernel [ 6.298] (II) fglrx(0): [uki] unmapping 8192 bytes of SAREA 0x2000 at 0x7fbef8209000 [ 6.337] (II) fglrx(0): Interrupt handler Shutdown. [ 6.470] ddxSigGiveUp: Closing log [ 6.470] Server terminated successfully (0). Closing log file. Thanks for any hints what is wrong here.

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp. config.vm.provision "puppet" do |puppet| puppet.manifests_path = "puppet_files/manifests" puppet.module_path = "puppet_files/modules" puppet.manifest_file = "site.pp" puppet.options = "--verbose --debug" end Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this: File { owner => 'root', group => 'root', mode => '0644', } import "hierasetup.pp" include jboss But I get this error about the deprecation of "import": Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190') According to the referenced URL under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further this puppet doc on main manifests says: "Recommended: If you’re using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments." It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp" line and point to the parent directory instead in which all the *.pp files there will be executed. However removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead: puppet provisioner: * The configured Puppet manifest is missing. Please specify a path to an existing manifest: /some/path/puppet_files/manifests/default.pp So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories in conjunction with Puppet's deprecation of "import"? Update: Thanks to Shane the issue with #2 (Vagrant's code not being caught up to allow pointing to puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example: Checking that certain users exist, and that they are associated with one or more user groups. If not, then create them and their group associations. Checking that target application folders exist and, if not, then create them with preconfigured path values defined when the package was assembled. Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them. Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system. Obviously, doing this for many boxes (the goal in question) by hand can be slow and error-prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another. 1) Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience with them. These would be my last resort. 2) Python scripts using the pexpect library for analyzing system command output. This was my initial choice since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P. 3) Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting. It seems obvious to me that I'm not the first person in this situation, but I can't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
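    For the kinds of prerequisite checks listed above, Python's standard library alone goes a long way; pexpect is not needed for users, groups, directories or permissions. The sketch below only illustrates that approach: the user, group, path and mode are made-up examples, and it verifies accounts rather than creating them (creation would still shell out to useradd/groupadd).

    ```python
    #!/usr/bin/env python
    """Sketch of host prerequisite checks using only the standard library.

    The names, paths and modes below are hypothetical examples.
    """
    import grp
    import os
    import pwd
    import stat

    def check_user_in_group(user, group):
        """Verify `user` exists and belongs to `group` (primary or supplementary)."""
        pw = pwd.getpwnam(user)      # raises KeyError if the user is missing
        gr = grp.getgrnam(group)     # raises KeyError if the group is missing
        if pw.pw_gid != gr.gr_gid and user not in gr.gr_mem:
            raise RuntimeError("%s is not a member of %s" % (user, group))

    def ensure_dir(path, owner, mode=0o750):
        """Create `path` if needed and enforce ownership and permissions."""
        if not os.path.isdir(path):
            os.makedirs(path, mode)
        uid = pwd.getpwnam(owner).pw_uid
        st = os.stat(path)
        if st.st_uid != uid:
            os.chown(path, uid, -1)  # -1 leaves the group unchanged
        if stat.S_IMODE(st.st_mode) != mode:
            os.chmod(path, mode)

    if __name__ == "__main__":
        check_user_in_group("appuser", "appgroup")  # hypothetical names
        ensure_dir("/opt/myapp/logs", "appuser")    # hypothetical path
    ```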

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment

    - by user69508
    (Please forgive if this is posted in an incorrect forum. We didn’t know exactly where to post it.) We have an ASP.NET Web API single page application - a browser-based app running in IIS to serve up HTML5/CSS3/JavaScript, which talks to the ASP.NET Web API endpoint only to access a database and transfer JSON data. Everything is working great in our development environment - that is, we have one Visual Studio solution with an ASP.NET Web API project and two class library projects for data access. While development and testing on development boxes, using IIS Express to a localhost:port to run the site and access the Web API, everything is fine. Now we need to move it to a production environment (and we’re having problems - or just not understanding what needs to be done). The production environment is all internal (nothing will be exposed on the public Internet). There are two domains. One domain, the corporate domain, is where all users login normally. The other domain, the process domain, contains the SQL Server instance that our app and Web API will need to access. The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having access into the process domain directly. So, I guess what they want is: corp domain (end users) <– firewall (open port 80) <– DMZ (web server running IIS) <– firewall (open port 80 or 1433????) <– process domain (IIS for Web API and SQL Server) We’re developers and don’t really understand all the networking aspects, so we’re wondering how to deploy our browser/Web API application in this scenario. Do we need to break up our application so that all the client code (HTML5/CSS3/JavaScript/images/etc.) is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain? Or, does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data? From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to "http://server/appname/api/getitmes"? In the second firewall between the DMZ and the process domain, would you have to open port 1433 or just port 80 since the Web API is a HTTP endpoint? Or, is there some better way of deployment (i.e., how ASP.NET Web API single page applications written all in HTML5 and JavaScript supposed to be deployed to production environments?)? I’m sure there are other questions, but we’ll start with these. Thanks!!! (Note: the servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.)

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site, with many modules installed. The web server (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100 Mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development. Obviously, on my laptop, the web server and database server are on the same box. Here are the results of my database query times: Production: Executed 291 queries in 320.33 milliseconds. (homepage) Executed 517 queries in 999.81 milliseconds. (content page) Development: Executed 316 queries in 46.28 milliseconds. (homepage) Executed 586 queries in 79.09 milliseconds. (content page) As can clearly be seen from these results, the time involved in querying the MySQL database is much shorter on my laptop, where the MySQL server is running on the same box as the web server. Why is this?! One factor must be the network latency. On average, a round trip from the web server to the database server takes 0.16 ms (shown by ping). That must be added to every single MySQL query. So, taking the content page example above, where 517 queries are executed, network latency alone will add about 82 ms to the total query time. However, that doesn't account for the difference I am seeing (79 ms on my laptop vs 999 ms on the production boxes). What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1 GB, with no change (I think this is because I have some BLOB and TEXT columns).
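    To see how much of the gap raw latency can explain, it helps to normalize to a per-query cost. The rough calculation below uses only the figures quoted above; it suggests the 0.16 ms round trip accounts for a small fraction of the roughly 1.8 ms of extra cost per query, so something beyond simple ping latency seems to be involved.

    ```python
    # Back-of-the-envelope check using the numbers quoted above.
    prod_queries, prod_ms = 517, 999.81   # production content page
    dev_queries, dev_ms = 586, 79.09      # development content page
    rtt_ms = 0.16                         # ping round trip, web -> DB server

    prod_per_query = prod_ms / prod_queries   # ~1.93 ms per query
    dev_per_query = dev_ms / dev_queries      # ~0.13 ms per query
    extra_per_query = prod_per_query - dev_per_query

    print("production:  %.2f ms/query" % prod_per_query)
    print("development: %.2f ms/query" % dev_per_query)
    print("extra cost:  %.2f ms/query (raw RTT is only %.2f ms)"
          % (extra_per_query, rtt_ms))
    ```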

    Read the article

  • Aggregate SharePoint Events/Items into your Calendar view using Calendar Overlay

    - by eJugnoo
    One of the most common features I have seen in use for SharePoint (prior to 2010) in intranet environments for team sites is Calendars. Not only the Calendar list type, but also the ability to add a Calendar view to any list that has the desired columns to construct a Calendar – such as Start, End, Title etc. While this was all great for a single site/calendar, the problem of having to track numerous calendars remained. The introduction of Outlook 2007's bi-directional integration with SharePoint, and particularly the ability of Outlook to overlay calendars, helped bridge the gap. Now one could connect to a number of team sites and set up Calendar overlays in Outlook using varying colours, to easily identify the event source and yet benefit from the plotting of events on a single Calendar view. This was all good, but each user in your enterprise was supposed to set this up in a “pull” fashion. This is good for flexibility, not so good when you need to “push” consistency and productivity (re-use). So, what was missing in SharePoint was the ability to have server-side overlays that everyone can see – in a single place, aggregating multiple sources. Until SharePoint 2010 arrived! Calendar Overlays in SharePoint 2010 There are Calendar lists and Calendar views. A view can be created for almost any list, as long as you have the desired columns in the list – like Start, End, Title etc. – to be able to describe and plot an item in a Calendar format. In SharePoint 2010, create a new Calendar list. Go to the Calendar ribbon tab and click Calendar Overlay. You get a screen with the list of existing overlays associated with the current Calendar (list, in our case). Click on “New Calendar”… Notice the breadcrumb! You are adding an overlay to an existing list (Team Calendar, in our case). You have the choice of “pulling” calendar info from an existing Calendar (list/view) in SharePoint or even from Exchange! Set standard info like a name and description, and decide the colour you want for the items in the aggregated Calendar overlay. Select the source site/list/view, anywhere in the farm. When you select Exchange as the source of the Calendar, you get the option to add OWA and Exchange Web Service URLs. I will cover the details of connecting with Exchange in another post, and focus on overlays with SharePoint for this one. Once you have added a new Calendar overlay to an existing Calendar view, you get something like the below for Day view, Week view, and Month view respectively. Notice the overlay colours: Now, if you decide to connect this Calendar to Outlook to sync the items, it will only sync items from the main view, and not from the overlay source. So such an overlay of calendars is server-side aggregation only. That increased my curiosity, so I tried adding the Calendar list view as a web part on a new page. As you see, this instance of the view didn't include items from the source that we had added to the default Calendar view. This is – probably – due to the fact that this is a new web-part view for the page. If you want to add an overlay to this one, you have to redo that from the ribbon. This also means that, subject to purpose and context, you get the flexibility to decide what overlay is suited. Also, you can only add 10 overlays to an existing view instance. Conclusion Calendar Overlay is clearly a very useful feature that fills the gap of not being able to aggregate information from multiple sources into a Calendar view within the context of the current items. The source of items can be existing SharePoint calendar views on any site, or even Exchange (via OWA/Exchange web services). The list type of the source doesn't matter; it just needs a Calendar view type available. You can have 10 overlays. Overlays are for the specific view only, and are server-side only – which means they do not get synced in Outlook. While you can drag and drop current list items, you cannot edit overlay items, as they are read-only within the scope of the current Calendar view. You can of course click on a source overlay item to edit it at the source. I'd like to hear how you think overlays will help you in your case, or how you are already using them... Enjoy SharePoint! --Sharad

    Read the article

  • How To Configure Remote Desktop To Hyper-V Guest Virtual Machines

    - by Brian Jackett
    Configuring Remote Desktop (RDP) from a host Hyper-V machine to a guest virtual machine can be tricky, so this post is dedicated to the issues and resolution steps I went through to allow RDP. Cutting to the point, below are the things to look for, followed by some explanation about my scenario if you care to read. This is not an exhaustive list of what is required, just the items that were causing problems for my particular scenario. Requirements Allow Remote Desktop Connections in the guest OS. The network adapter type must allow communication with the host machine (e.g. use an “Internal” virtual adapter). If running Server 2008 R2 on the guest, network discovery mode must be turned on. If running Server 2008 R2 on the guest, the services supporting network discovery mode must be running: - DNS Client - Function Discovery Resource Publication - SSDP Discovery - UPnP Device Host My Environment A quick word about my environment. I am running Windows Server 2008 R2 with Hyper-V on my laptop and numerous guest VMs running Windows Server 2003 R2 or Windows Server 2008 R2. I run a domain controller VM and then 1 or 2 SharePoint servers depending on my work needs. I’ve found this setup to work well except when it comes to the display window for my VMs. The Issue Ever since I began running Hyper-V I haven’t been able to RDP to my guest VMs, which means the resolution for my connection windows has been limited to what the native Hyper-V connections allow. During personal use I can put the resolution up to 1152 x 864, but during presentations I am usually limited to a measly 800 x 600. That is, until today, when I decided to fully investigate why I couldn’t connect via RDP. First a thank you to John Ross (@johnrossjr), Christina Wheeler (@cwheeler76) and Clayton Cobb (@warrtalon) for various suggestions while I was researching tonight. As it turns out I had not 1, not 2, but 3 items preventing me from using RDP. Let’s dig into the requirements above. Allow RDP Connection This item I had previously taken care of, but it bears repeating because by default Windows Server 2008 R2 does not allow RDP connections. Change the setting from “Don’t allow…” to whichever “Allow connections…” setting suits your needs. I chose the less secure option as this is just my dev laptop. Network Adapter Type When I originally configured my VMs I configured each to use 2 network adapters: one using the physical ethernet adapter for internet use and a virtual private adapter for communication between the VMs. The connection for the ethernet adapter is an “External” adapter and thus doesn’t connect between the host and guest. The virtual private adapter allowed communication ONLY between the VMs and not to my host. There is a third option, “Internal”, which allows communication between VMs as well as to the host. After finding out this distinction I promptly created an Internal network adapter and assigned that to my VMs. Turn On Network Discovery Seems like a pretty common-sense thing, but in order to allow remote desktop connections the target computer must be able to be found by the source computer (explained here). One of the settings that controls if a computer can be found on the network is aptly named Network Discovery. By default Windows Server 2008 R2 turns Network Discovery off for security purposes. To enable it, open up the Network and Sharing Center. Click “Change Advanced Sharing Settings” on the left.
On the following screen select “Turn on network discovery” for the currently used profile and click Save Settings.  You may notice though that your selection to turn on network discovery doesn’t save.  If this is the case then you most likely don’t have the supporting services running (as was my case.) Network Discovery Supporting Services     There are a total of 4 services (listed again below) that need to be running before you can turn on network discovery (explained here.)  The below images highlight these services.  In my guest VM I found that I had DNS Client already running while the other 3 were disabled.  I set them all to enabled and started the ones that were stopped.  After this change I returned to the Sharing settings screen and found that Network Discovery was turned on.  I’m not sure whether this was picking up my attempt to turn it on previously or if starting those services turned it on.  Either way the end result was a success. - DNS Client - Function Discovery Resource Publication - SSDP Discovery - UPnP Device Host Before and After Results     The first image is the smaller square shaped viewing window used by the Hyper-V native connection.  The second is the full-screen RDP connection in all its widescreen glory. Conclusion     Over the past few months I’ve found Hyper-V to be very useful for virtualizing my development environments, but I’ve also had a steep learning curve to get various items configured just right.  Allowing RDP connections to guest VMs was one area that I hadn’t been able to get right for the longest time.  Now that I resolved these issues I hope that others can avoid the pitfalls that I ran into.  If you know of any other items I left off feel free to let me know.        -Frog Out   Links Turning on Network Discovery http://sqlblog.com/blogs/john_paul_cook/archive/2009/08/15/remote-desktop-connection-on-windows-server-2008-r2.aspx Services required for Network Discovery http://social.technet.microsoft.com/Forums/en-US/winservergen/thread/2e1fea01-3f2b-4c46-a631-a8db34ed4f84

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition.  Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task related issues, the aptly named Task class. The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition.  We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added.  In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more.  In particular, note that I never mentioned threading or parallelism in my description of a Task.  Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used.  The Task API supports using Tasks in an entirely single threaded, synchronous manner. Tasks are very much like standard delegates.  You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously.  This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments. The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4.  If the wrapped delegate within a Task raises an OperationCanceledException during it’s operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically.  This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling. Tasks also provide a clean API which can be used for waiting on a task.  Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model.  Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed.  Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely.  In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing time out options.  This provides a very simple, clean API for waiting on single or multiple tasks. Finally, Tasks provide a much nicer model for Exception handling.  If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property.  This exception is stored with the Task directly, and does not tear down the application.  
Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception.  For example, suppose we have the following code: Task taskOne = new Task( () => { throw new ApplicationException("Random Exception!"); }); Task taskTwo = new Task( () => { throw new ArgumentException("Different exception here"); }); // Start the tasks taskOne.Start(); taskTwo.Start(); try { Task.WaitAll(new[] { taskOne, taskTwo }); } catch (AggregateException e) { Console.WriteLine(e.InnerExceptions.Count); foreach (var inner in e.InnerExceptions) Console.WriteLine(inner.Message); } Here, our routine will print: 2 Different exception here Random Exception! Note that we had two separate tasks, each of which raised a distinctly different type of exception.  We can handle this cleanly, with very little code, in a much nicer manner than the Asynchronous Programming API.  We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property.  Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.

    Read the article
