Search Results

Search found 223 results on 9 pages for 'charlie pigarelli'.


  • What did Stallman mean in this quote about implementing other languages in Lisp?

    - by Charlie Flowers
    I just read the following quote from Stallman as part of a speech he gave many years ago. He's talking about how it is feasible to implement other programming languages in Lisp, but not feasible to implement Lisp in those other programming languages. He seems to take for granted that the listeners/readers understand why. But I don't see why. I think the answer will explain something about Lisp to me, and I'd like to understand it. Can someone explain it? Here's the quote: "There's an interesting benefit you can get from using such a powerful language as a version of Lisp as your primary extensibility language. You can implement other languages by translating them into your primary language. If your primary language is TCL, you can't very easily implement Lisp by translating it into TCL. But if your primary language is Lisp, it's not that hard to implement other things by translating them." The full speech is here: http://www.gnu.org/gnu/rms-lisp.html Thanks.

    Read the article

  • GoDaddy VPS or switch to another provider?

    - by Charlie
    Long story short, I want close to full control over my server, mainly being able to install things on my server that GoDaddy shared hosting does not allow (gzip, etc.). Should I switch providers to something else (all my domains are hosted there) or upgrade my hosting to VPS? If I upgrade, how hard will it be to set it up without their "assisted service" add-on (where they do virus scanning, etc.)?
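
    One hedged note on the gzip part of the question (it does not settle VPS vs. switching hosts): even on shared hosting where the server config is locked down, PHP can usually compress its own output at the script level. A minimal sketch, assuming the host's PHP has the zlib extension:

      <?php
      // Sketch only: compress this script's output from PHP itself, without touching
      // the server config. ob_gzhandler silently falls back to uncompressed output
      // if zlib is missing or the browser does not accept gzip.
      ob_start('ob_gzhandler');

      echo '<html><body>... page content ...</body></html>';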

    Read the article

  • Redirect from https://mydomain.com to http://mydomain.com

    - by Charlie
    Many of my visitors have already bookmarked my site with https://mydomain.com. Under the bad advice of a programmer, I put my whole Joomla site under SSL. I do not sell anything or provide any member services. I asked him many times if it would slow my site down - he said it wouldn't. I knew it did; I've researched on this site and realized it does slow the site down because the pages are not cached. Understood. Please, someone tell me how to get away from it now. I'm not sure how to approach this: should I add something to my htaccess or my main index.php file? I've looked all over the net; there is plenty of advice on redirecting from http to https, but very few answers on the opposite, going from https to http. Thank you very much for your time. I appreciate it.
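
    A minimal sketch of the index.php route mentioned above (not Joomla-specific; adapt the host/path handling as needed): detect a request that arrived over HTTPS and issue a permanent redirect to the plain-HTTP URL.

      <?php
      // Sketch: place near the top of the front controller (e.g. index.php).
      // If the request came in over HTTPS, bounce it permanently to http://.
      if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
          $target = 'http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
          header('HTTP/1.1 301 Moved Permanently');
          header('Location: ' . $target);
          exit;
      }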

    Read the article

  • Recorded YouTube-like presentation and "live" demos of Oracle Advanced Analytics

    - by chberger
    Ever want to just sit and watch a YouTube-like presentation and "live" demos of Oracle Advanced Analytics? Then click here! This 1+ hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option and is tied to the Oracle SQL Developer Days virtual and onsite events. I cover: Big Data + Big Data Analytics Competing on analytics & value proposition What is data mining? Typical use cases Oracle Data Mining high performance in-database SQL based data mining functions Exadata "smart scan" scoring Oracle Data Miner GUI (an extension that ships with SQL Developer) Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards Applications "powered by Oracle Data Mining" for factory installed predictive analytics methodologies Oracle R Enterprise Please contact charlie[email protected] should you have any questions. Hope you enjoy! Charlie Berger, Sr. Director of Product Management, Oracle Data Mining & Advanced Analytics, Oracle Corporation

    Read the article

  • Video Presentation and Demo of Oracle Advanced Analytics & Data Mining

    - by Mike.Hallett(at)Oracle-BI&EPM
    For a video presentation and demonstration of Oracle Advanced Analytics & Data Mining  click here. (This plays a large MP4 file in a browser: access is from Google.docs, and this works best with Google CHROME). This one hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option along with Oracle R Enterprise and is tied to the Oracle SQL Developer Days virtual and onsite events and is presented by Oracle’s Director for Advanced Analytics, Charlie Berger, covering: Big Data + Big Data Analytics Competing on analytics & value proposition What is data mining? Typical use cases Oracle Data Mining high performance in-database SQL based data mining functions Exadata "smart scan" scoring Oracle Data Miner GUI (an Extension that ships with SQL Developer) Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards Applications "powered by Oracle Data Mining" for factory installed predictive analytics methodologies Oracle R Enterprise Please contact charlie[email protected] should you have any questions. 

    Read the article

  • New R Interface to Oracle Data Mining Available for Download

    - by charlie.berger
      The R Interface to Oracle Data Mining ( R-ODM) allows R users to access the power of Oracle Data Mining's in-database functions using the familiar R syntax. R-ODM provides a powerful environment for prototyping data analysis and data mining methodologies. R-ODM is especially useful for: Quick prototyping of vertical or domain-based applications where the Oracle Database supports the application Scripting of "production" data mining methodologies Customizing graphics of ODM data mining results (examples: classification, regression, anomaly detection) The R-ODM interface allows R users to mine data using Oracle Data Mining from the R programming environment. It consists of a set of function wrappers written in source R language that pass data and parameters from the R environment to the Oracle RDBMS enterprise edition as standard user PL/SQL queries via an ODBC interface. The R-ODM interface code is a thin layer of logic and SQL that calls through an ODBC interface. R-ODM does not use or expose any Oracle product code as it is completely an external interface and not part of any Oracle product. R-ODM is similar to the example scripts (e.g., the PL/SQL demo code) that illustrates the use of Oracle Data Mining, for example, how to create Data Mining models, pass arguments, retrieve results etc. R-ODM is packaged as a standard R source package and is distributed freely as part of the R environment's Comprehensive R Archive Network (CRAN). For information about the R environment, R packages and CRAN, see www.r-project.org. R-ODM is particularly intended for data analysts and statisticians familiar with R but not necessarily familiar with the Oracle database environment or PL/SQL. It is a convenient environment to rapidly experiment and prototype Data Mining models and applications. Data Mining models prototyped in the R environment can easily be deployed in their final form in the database environment, just like any other standard Oracle Data Mining model. What is R? R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. R was initially written by Ross Ihaka and Robert Gentleman at the Department of Statistics of the University of Auckland in Auckland, New Zealand. Since mid-1997 there has been a core group (the "R Core Team") who can modify the R source code archive. Besides this core group many R users have contributed application code as represented in the near 1,500 publicly-available packages in the CRAN archive (which has shown exponential growth since 2001; R News Volume 8/2, October 2008). Today the R community is a vibrant and growing group of dozens of thousands of users worldwide. It is free software distributed under a GNU-style copyleft, and an official part of the GNU project ("GNU S"). Resources: R website / CRAN R-ODM

    Read the article

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems". However, I've never seen a catchy label to describe what sort of tests we create using this framework. They are not really “Unit Tests”, that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions: Automated Integration Tests Stubbed Integration Tests Sandbox Integration Tests Localised Integration Tests All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive. So I will use that until someone comes up with a better idea.

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests. Stubs Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems: In contract-first scenarios, the external system interface will have been defined.  But the interface may not have been setup or even developed yet for the BizTalk developers to work with. By the time you open the target location to see the data BizTalk has sent, it may have been swept away. If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later. Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort. By the time the data is visible in a UI it may have undergone further transformations. In larger development teams working together, do you all use the same source and target instances. How do you know which data was created by whose tests? How do you know which event log error message are whose?  Another developer may have “cleaned up” your data. It is harder to write BizUnit tests that clean up the data\logs after each test run. What if your B2B partners' source or target system cannot support the sort of testing you want to do. They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams. There may be licencing costs of setting up an instances of the external system. The stubs I like to use are generic stubs that can accept\return any message type.  Usually I need to create one per protocol. They should be driven by BizUnit steps to: validates the data received; and select a response messages (or error response). Once built, they can be re-used for many integration tests and from project to project. I’m not saying that developers should never test against a real instance.  Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works.  Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk. Tests Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. It will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine grained unit testing of individual BizTalk components is still encouraged.  But BizUnit provides much the easiest way to test some components types (e.g. Orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits: source: http://biztalkbddsample.codeplex.com – Video 1. 
Requirements can be easily defined using Given/When/Then Requirements are close to the code so easier to manage as features and scenarios Requirements are defined in domain language The feature files can be used as part of the documentation The documentation is accurate to the build of code and can be published with a release The scenarios are effective to document the scenarios and are not over excessive The scenarios are maintained with the code There’s an abstraction between the intention and implementation of tests making them easier to understand The requirements drive the testing These same tests can also be used to drive load testing as described here. If you don't do this ... If you don't follow the above “Stubbed Integration Tests” approach, the developer will need to manually trigger the tests. This has the following risks: Developers are unlikely to check all the scenarios each time and all the expected conditions each time. After the developer leaves, these manual test steps may be lost. What test scenarios are there?  What test messages did they use for each scenario? There is no mechanism to prove adequate test coverage. A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks: It moves the tests downstream, so problems will be found later in the process. Testers may not check all the expected conditions within the BizTalk infrastructure such as: event logs, suspended messages, etc. These automated tests may also get in the way of manual tests run on these environments.

    Read the article

  • TODO Formatting

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott TODO's should only be used for a short period of time to remind you that something needs to be done. They should then be addressed as soon as possible. In order to know who owns a TODO task and how long it’s been outstanding, my company uses the following standard for TODO formatting: Format:     // TODO : Owner Initials – Date Created – Description of task. Sample:     // TODO: CM – 2012/01/20 – Move this class to a new location so it can be reused. Using this pattern makes it easy to use the Resharper TODO explorer. The Carrot In order to make it easy for developers to apply this rule, a code snippet can be created in Visual Studio. Even better, I created a Resharper template. This gives the facility to use the current user name and current date macros. This actually makes the formatting look like this. Sample:     // TODO: cmott – 2012/01/20 – Move this class to a new location so it can be reused. The Stick How do you enforce such a rule? I tried to create a custom Resharper Highlighting Pattern to perform custom code analysis inspection for deviations from this pattern. However, I did not have any success. The find dialog would not accept // text. If I work it out, I will update this blog post. StyleCop Instead I created a custom StyleCop rule. I followed the approach used with the StyleCop Contrib project. This provides a simple-to-use base class and an easy-to-use unit testing framework. I will upload this TODO format analyzer as a patch to that project.

    Read the article

  • Azure Task Scheduling Options

    - by charlie.mott
    Currently, the Azure PaaS does not offer a distributed\resilient task scheduling service.  If you do want to host a task scheduling product\solution off-premise (and ideally use Azure), what are your options? PaaS Option 1: Worker Roles Use a worker role to schedule and execute actions at specific time periods.  There are a few frameworks available to assist with this: http://azuretoolkit.codeplex.com https://github.com/Lokad/lokad-cloud/wiki/TaskScheduler http://blog.smarx.com/posts/building-a-task-scheduler-in-windows-azure - This addresses a slightly different set of requirements. It’s a more dynamic approach for queuing up tasks, but not repeatable tasks (e.g. daily). I found the Azure Toolkit option the most simple to implement.  Step 1 : Create a domain entity implementing IJob for each job to schedule.  In this sample, I asynchronously call a WCF service method. 1: namespace Acme.WorkerRole.Jobs 2: { 3: using AzureToolkit; 4: using ScheduledTasksService; 5: 6: public class UploadEmployeesJob : IJob 7: { 8: public void Run() 9: { 10: // Call Tasks Service 11: var client = new ScheduledTasksServiceClient("BasicHttpBinding_IScheduledTasksService"); 12: client.UploadEmployees(); 13: client.Close(); 14: } 15: } 16: } Step 2 : In the worker role run method, add the jobs to the toolkit engine. 1: namespace Acme.WorkerRole 2: { 3: using AzureToolkit.Engine; 4: using Jobs; 5:   6: public class WorkerRole : WorkerRoleEntryPoint 7: { 8: public override void Run() 9: { 10: var engine = new CloudEngine(); 11:   12: // Add Scheduled Jobs (using CronJob syntax - see http://www.adminschoice.com/crontab-quick-reference). 13:   14: // 1. Upload Employee job - 8.00 PM every weekday (Mon-Fri) 15: engine.WithJobScheduler().ScheduleJob<UploadEmployeesJob>(c => { c.CronSchedule = "0 20 * * 1-5"; }); 16: // 2. Purge Data job - 10 AM every Saturday 17: engine.WithJobScheduler().ScheduleJob<PurgeDataJob>(c => { c.CronSchedule = "0 10 * * 6"; }); 18: // 3. Process Exceptions job - Every 5 minutes 19: engine.WithJobScheduler().ScheduleJob<ProcessExceptionsJob>(c => { c.CronSchedule = "*/5 * * * *"; }); 20:   21: engine.Run(); 22: base.Run(); 23: } 24: } 25: } Pros Cons Azure Toolkit option is simple to implement. For the AzureToolkit option, you are limited to a single worker role.  Otherwise, the jobs will be executed multiple times, once for each worker role instance.   Paying for a continuously running worker role, even if it just processes a single job once a week.  If you only have a few scheduled tasks to run calling asynchronous services hosted in different web roles, an extra small worker role likely to be sufficient.  However, for an extra small worker role this still costs $14.40/month (03/09/2012). Option 2: Use Scheduled Task on Azure Web Role calling a console app Setup a Windows Scheduled Task on the Azure Web Role. This calls a console application that calls the WCF service methods that run the task actions. This design is described here: http://www.ronaldwidha.net/2011/02/23/cron-job-on-azure-using-scheduled-task-on-a-web-role-to-replace-azure-worker-role-for-background-job/ http://www.voiceoftech.com/swhitley/index.php/2011/07/windows-azure-task-scheduler/ http://devlicio.us/blogs/vinull/archive/2011/10/23/moving-to-azure-worker-roles-for-nothing-and-tasks-for-free.aspx Pros Cons Fairly easy to implement. Supportability - I RDC’ed onto the Azure server and stopped the scheduled task. I then rebooted the machine and the task was re-started. 
I also tried deleting the task and rebooting, the same thing occurred. The only way to permanently guarantee that a task is disabled is to do a fresh deployment. I think this is a major supportability concern.   Saleability - multiple instances would trigger multiple tasks. You can only have one instance for the scheduled task web role. The guidance implements setup of the scheduled task as part of a web role instance. But if you have more than one instance in a web role, the task will be triggered multiple times for each scheduled action (once per machine). Workaround: If we wanted to use scheduled tasks for another client with a saleable WCF service, then we could include the console & tasks scripts in a separate web role (e.g. a empty WCF service with no real purpose to it). SaaS Option 3: Azure Marketplace I thought that someone might be offering this type of service via the Azure marketplace. At the point of writing this blog post, I did not find anyone doing so. https://datamarket.azure.com/ Pros Cons   Nobody currently offers this on the Azure Marketplace. Option 4: Online Job Scheduling Service Provider There are plenty of online providers that offer this type of service on a pay-as-you-go approach.  Some of these are free for small usage.   Many of these providers are listed here: http://en.wikipedia.org/wiki/Webcron Pros Cons No bespoke development for scheduler. Reliance on third party. IaaS Option 5: Setup Scheduling Software on Azure IaaS VM’s One of job scheduling software offerings could be installed and configured on Azure VM’s.  A list of software options is listed here: http://en.wikipedia.org/wiki/List_of_job_scheduler_software Pros Cons Enterprise distributed\resilient task scheduling service VM Setup and maintenance   Software Licence Costs Option 6: VM Gallery A the time of writing this blog post, I did not spot a VM in the gallery that included pre-installation of any of the above software options. Pros Cons   No current VM template. Summary For my current project that had a small handful of tasks to schedule with a limited project budget I chose option 1 (a worker role using the Azure Toolkit to schedule tasks).  If I was building an enterprise scale solution for the future, options 4 and 5 are currently worthy of consideration. Hopefully, Microsoft will include tasks scheduling in the future as part of their PaaS offerings.

    Read the article

  • Ubuntu dual-boot errors in 13.10

    - by Charlie
    Specs: Lenovo U530, 500 GB HDD, 8 GB RAM, 64-bit, i7 Intel processor with integrated graphics. I'm trying to dual-boot Ubuntu but I have the following problems: After booting from the live DVD, GRUB loads but with the following errors: "Could not open '\EFI\BOOT\fallback.efi': 14" and "error: variable 'root' isn't set". I have disabled Fast Startup and Secure Boot before booting the disc. Also, after choosing "Try Ubuntu without installing", my screen and DVD drive shut off momentarily; the drive then spins back up, but my screen remains black and sometimes flashes without displaying anything. I will greatly appreciate any help. P.S. I don't have any previous version of Ubuntu installed, so this is NOT an upgrade from a previous version.

    Read the article

  • GZipped Images - Is It Worth It?

    - by charlie
    Most image formats are already compressed. But in fact, if I take an image and compress it [gzipping it] and then compare the compressed one to the uncompressed one, there is a difference in size, though not a dramatic one. The question is: is it worth gzipping images? The content size flushed down to the client's browser will be smaller, but there will be some client overhead when de-gzipping it. Please advise.
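
    A rough way to put numbers on this: the sketch below (PHP, with a hypothetical photo.jpg path) simply compares an image's size before and after gzip, so the saving can be weighed against the decompression overhead on the client.

      <?php
      // Sketch: measure how much gzip actually saves on an already-compressed image.
      // 'photo.jpg' is a placeholder path; level 6 mirrors common web server defaults.
      $raw = file_get_contents('photo.jpg');
      $gz  = gzencode($raw, 6);

      $saved = strlen($raw) - strlen($gz);
      printf("original: %d bytes, gzipped: %d bytes, saved: %d bytes (%.1f%%)\n",
             strlen($raw), strlen($gz), $saved, 100 * $saved / strlen($raw));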

    Read the article

  • Bluetooth issues after Saucy upgrade

    - by charlie
    I upgraded from Raring to Saucy, and now my Bluetooth is giving me problems. First, even though I have built-in Bluetooth in my laptop, which I prefer to keep off unless I am using it, there is no greyed-out Bluetooth icon in the notification area. This was present in Quantal, but was missing in Raring too. I was hoping to get this fixed in Saucy. Seems otherwise. Secondly, and more importantly, in Saucy when I turn on my Bluetooth, it immediately (or within seconds) turns off automatically, without giving me any time to even make the device visible. Please help!

    Read the article

  • Resize/Create a New Partition in GParted

    - by Charlie
    So I've been having some trouble recently with Ubuntu and decided it was time to switch to Windows. But I have no NTFS partitions on my hard disk, and GParted will not let me resize my one large partition (/dev/sda1) so that I can allocate some NTFS space to install Windows on. Any help would be greatly appreciated; I've had this problem for quite some time now and it has just become one big headache. Thanks in advance!

    Read the article

  • T_INLINE_HTML? What's wrong with this?

    - by Charlie Pigarelli
    <? switch($data['type']) : ?> <? case 'log': ?> <? while ($row = $data['loop']->fetch()) : ?> <table class="t-errors"> <tr> <td> <b>IP:</b> <? echo $row['LogShellIP']; ?> <b>Command:</b> <? echo $row['LogShellCommand']; ?> <b>Executed:</b> <? echo $row['LogShellReturn']; ?> <b>Time:</b> <? echo format::time($row['LogShellTime']); ?> </td> </tr> </table> <? endwhile; ?> <? break; ?> <? case 'fatal': ?> <? case 'warning': ?> <? case 'notice': ?> <? case 'unknown': ?> <? while ($row = $data['loop']->fetch()) : ?> <table class="t-errors"> <tr> <td <? if ($row['LogErrorSeen'] == 0) { echo 'class="e-selected"'; } ?>> <b>String:</b> <? echo $row['LogErrorString']; ?> <b>File:</b> <? echo $row['LogErrorFile']; ?> <b>Line:</b> <? echo $row['LogErrorLine']; ?> <b>Context:</b> <? echo $row['LogErrorContext']; ?> <b>Ip:</b> <? echo $row['LogErrorIP']; ?> <b>Time:</b> <? echo format::time($row['LogErrorTime']); ?> </td> </tr> </table> <? endwhile; ?> <? break; ?> <? endswitch; ? I'm getting this error: Parse error: syntax error, unexpected T_INLINE_HTML, expecting T_ENDSWITCH or T_CASE or T_DEFAULT in /Applications/XAMPP/xamppfiles/htdocs/Smooth Framework/tpl/terminal.tpl.php on line 33 Where line 33 is the line 2 of this script. This is inserted in a template context. What's wrong with this? He is expecting a T_CASE and that's what is there!
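
    For context, the usual cause of this error: with the alternative switch (...): syntax, PHP allows no output at all between the switch line and the first case, and closing/reopening PHP tags (?> ... <?) emits the whitespace in between as inline HTML exactly there. A minimal sketch of the common fix, keeping the switch opening and the first case in one PHP block:

      <?php
      // Sketch of the fix: keep "switch (...):" and the first "case" in ONE PHP block,
      // so no inline HTML (not even whitespace) is emitted before the first case.
      $data = array('type' => 'log'); // placeholder so the sketch runs stand-alone

      switch ($data['type']):
          case 'log':
              echo 'render the log rows here';
              break;
          case 'fatal':
          case 'warning':
          case 'notice':
          case 'unknown':
              echo 'render the error rows here';
              break;
      endswitch;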

    Read the article

  • Lazy loading? Better to avoid it?

    - by Charlie Pigarelli
    I just read about this design pattern: Lazy Load. And, since in the application I'm working on I have all the classes in one folder, I was wondering if this pattern could just let me avoid the include() function for every class. I mean: it's nice to know that if I forget to include a class, PHP, before failing with an error, tries to load it through an __autoload() function. But is it really fine to not care about including classes and let PHP do it on its own every time? Or should we write __autoload() just in case it is needed?
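
    A minimal autoloader sketch for the single-folder layout described above (the classes/ directory name is an assumption): once registered, a forgotten include is resolved lazily on first use instead of ending in a fatal error.

      <?php
      // Sketch: one folder of classes, file name equal to class name (both assumptions).
      // spl_autoload_register() is generally preferred over defining __autoload() directly,
      // since several autoloaders can then coexist.
      spl_autoload_register(function ($class) {
          $file = __DIR__ . '/classes/' . $class . '.php';
          if (is_file($file)) {
              require $file; // loaded lazily, only when the class is first used
          }
      });

      $config = new config(); // no explicit include needed if classes/config.php exists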

    Read the article

  • Static methods requiring var

    - by Charlie Pigarelli
    OK, I'm stuck on this, why don't I get what I need? class config { private $config; # Load configurations public function __construct() { loadConfig('site'); // load a file with $cf in it loadConfig('database'); // load another file with $cf in it $this->config = $cf; // $cf is an array unset($cf); } # Get a configuration public static function get($tag, $name) { return $this->config[$tag][$name]; } } I'm getting this: Fatal error: Using $this when not in object context in [this file] on line 22 [return $this->config[$tag][$name];] And I need to call the method in this way: config::get()...
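
    A minimal sketch of the usual fix (the load() helper and the usage lines are illustrative, not the asker's final design): a static get() cannot use $this, so the configuration has to live in a static property reached through self::.

      <?php
      // Sketch: store the configuration statically so the static get() can reach it.
      class config
      {
          private static $config = array();

          // hypothetical loader: merge a configuration array into the static store
          public static function load(array $cf)
          {
              self::$config = array_merge(self::$config, $cf);
          }

          public static function get($tag, $name)
          {
              return self::$config[$tag][$name];
          }
      }

      // usage sketch:
      config::load(array('database' => array('host' => 'localhost')));
      echo config::get('database', 'host'); // prints "localhost"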

    Read the article

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/ I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error: /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory FAIL: smoketest.php There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with: dpkg-source: error: aborting due to unexpected upstream changes Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas? Edit post James' answer: Builds fine if I follow instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error: Setting up php5-xapian (1.2.8-1) ... Processing triggers for libapache2-mod-php5 ... dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied ssion denied dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: libapache2-mod-php5 It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

    Read the article

  • Regex to repeat a capture across a CDL?

    - by richardtallent
    I have some data in this form: @"Managers Alice, Bob, Charlie Supervisors Don, Edward, Francis" I need a flat output like this: @"Managers Alice Managers Bob Managers Charlie Supervisors Don Supervisors Edward Supervisors Francis" The actual "job title" above could be any single word, there's no discrete list to work from. Replacing the ,  with \r\n is easy enough, as is the first replacement: Replace (^|\r\n)(\S+\s)([^,\r\n]*),\s With $1$2$3\r\n$2 But capturing the other names and applying the same prefix is what is eluding me today. Any suggestions?
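
    Not the single regex being asked for, but a procedural sketch of the same transformation (shown in PHP purely to make the intended output concrete): split per line, take the first word as the title, and prefix each comma-separated name with it.

      <?php
      // Sketch: flatten "Title A, B, C" lines into one "Title X" line per name.
      $input = "Managers Alice, Bob, Charlie\r\nSupervisors Don, Edward, Francis";

      $out = array();
      foreach (preg_split('/\r?\n/', $input) as $line) {
          list($title, $names) = explode(' ', trim($line), 2); // first word = job title
          foreach (explode(',', $names) as $name) {
              $out[] = $title . ' ' . trim($name);
          }
      }
      echo implode("\r\n", $out); // Managers Alice ... Supervisors Francis, one per line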

    Read the article

  • I have bluescreen-phobia

    - by Charlie Somerville
    I'm scared of the Blue Screen of Death. Seriously. Whenever I hibernate or shutdown my computer, I have to turn the screen off in case it bluescreens during that process. Whenever I see a bluescreen, it makes my heart skip a beat and I jump a little - especially if it's on my own computer. I decided that this is getting ridiculous after I experienced a BSOD this evening. My computer booted back up and I decided to switch off the screen and leave the room. I came back about a minute later and it still hadn't got to the login screen. To find out what happened, I actually covered the screen with two A4 pages and gradually peeled them back after I saw that the screen wasn't blue. This is going way too far, but I can't help it. I am legitimately afraid of BSODs. Does anyone have any advice on how I can help myself?

    Read the article

  • MPM Prefork Apache Uses Absurd Amount of Memory

    - by Charlie JM
    Help! My apache processes are all using 115MB of memory on startup. Relevant information: Linux version (uname -a) Linux 2.6.31-14-generic-pae #48-Ubuntu SMP Fri Oct 16 15:22:42 UTC 2009 i686 GNU/Linux Apache version (/usr/sbin/apache2 -v) Server version: Apache/2.2.8 (Ubuntu) Server built: Mar 9 2010 20:45:36 Top display (top -u www-data) PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 23377 www-data 20 0 115m 94m 3908 S 28 1.6 0:04.59 apache2 23375 www-data 20 0 119m 99m 5892 S 9 1.6 0:05.04 apache2 23324 www-data 20 0 116m 96m 5144 S 2 1.6 0:04.73 apache2 23283 www-data 20 0 115m 95m 4480 S 1 1.6 0:04.89 apache2 23259 www-data 20 0 116m 96m 5380 S 0 1.6 0:05.55 apache2 23370 www-data 20 0 115m 94m 4396 S 0 1.6 0:04.75 apache2 23229 www-data 20 0 116m 96m 6096 S 0 1.6 0:05.43 apache2 ... and so on ... Memory map (pmap $(pidof apache2)) (actually, just one apache2 process) Most of the memory is [anon], see line 5 23324: /usr/sbin/apache2 -k start 08048000 332K r-x-- /usr/sbin/apache2 0809b000 8K rw--- /usr/sbin/apache2 0809d000 12K rw--- [ anon ] 093a0000 92812K rw--- [ anon ] b5b6c000 4K rw--- [ anon ] b5b6d000 512K rw-s- [ shmid=0x13528003 ] b5fa8000 16K r-x-- /lib/tls/i686/cmov/libnss_dns-2.7.so b5fac000 8K rw--- /lib/tls/i686/cmov/libnss_dns-2.7.so b5fae000 120K r-x-- /usr/lib/php5/20060613+lfs/suhosin.so b5fcc000 16K rw--- /usr/lib/php5/20060613+lfs/suhosin.so b5fd0000 4K rw--- [ anon ] b5fd1000 76K r-x-- /usr/lib/php5/20060613+lfs/pdo.so b5fe4000 8K rw--- /usr/lib/php5/20060613+lfs/pdo.so b5fe6000 92K r-x-- /usr/lib/php5/20060613+lfs/mysqli.so b5ffd000 8K rw--- /usr/lib/php5/20060613+lfs/mysqli.so b5fff000 1648K r-x-- /usr/lib/libmysqlclient.so.15.0.0 b619b000 268K rw--- /usr/lib/libmysqlclient.so.15.0.0 b61de000 4K rw--- [ anon ] b61f0000 92K r-x-- /usr/lib/libxcb.so.1.0.0 b6207000 4K rw--- /usr/lib/libxcb.so.1.0.0 b6208000 164K r-x-- /usr/lib/libfontconfig.so.1.3.0 b6231000 4K rw--- /usr/lib/libfontconfig.so.1.3.0 b6232000 124K r-x-- /usr/lib/libjpeg.so.62.0.0 b6251000 4K rw--- /usr/lib/libjpeg.so.62.0.0 b6252000 136K r-x-- /usr/lib/libpng12.so.0.15.0 b6274000 4K rw--- /usr/lib/libpng12.so.0.15.0 b6275000 60K r-x-- /usr/lib/libXpm.so.4.11.0 b6284000 4K rw--- /usr/lib/libXpm.so.4.11.0 b6285000 912K r-x-- /usr/lib/libX11.so.6.2.0 b6369000 12K rw--- /usr/lib/libX11.so.6.2.0 b636c000 424K r-x-- /usr/lib/libfreetype.so.6.3.16 b63d6000 12K rw--- /usr/lib/libfreetype.so.6.3.16 b63d9000 236K r-x-- /usr/lib/libt1.so.5.1.1 b6414000 12K rw--- /usr/lib/libt1.so.5.1.1 b6417000 84K rw--- [ anon ] b642c000 116K r-x-- /usr/lib/libgd.so.2.0.0 b6449000 128K rw--- /usr/lib/libgd.so.2.0.0 b6469000 16K rw--- [ anon ] b646d000 88K r-x-- /usr/lib/php5/20060613+lfs/gd.so b6483000 16K rw--- /usr/lib/php5/20060613+lfs/gd.so b6487000 192K r-x-- /usr/lib/libidn.so.11.5.30 b64b7000 4K rw--- /usr/lib/libidn.so.11.5.30 b64b8000 232K r-x-- /usr/lib/libcurl.so.4.0.1 b64f2000 4K rw--- /usr/lib/libcurl.so.4.0.1 b64f8000 44K r-x-- /usr/lib/php5/20060613+lfs/mysql.so b6503000 4K rw--- /usr/lib/php5/20060613+lfs/mysql.so b6504000 268K r-x-- /usr/lib/libgmp.so.3.4.2 b6547000 4K rw--- /usr/lib/libgmp.so.3.4.2 b6548000 648K r-x-- /usr/lib/libclamav.so.5.0.4 b65ea000 44K rw--- /usr/lib/libclamav.so.5.0.4 b65f8000 52K r-x-- /usr/lib/php5/20060613+lfs/curl.so b6605000 4K rw--- /usr/lib/php5/20060613+lfs/curl.so b6606000 148K r-x-- /usr/lib/libmcrypt.so.4.4.7 b662b000 8K rw--- /usr/lib/libmcrypt.so.4.4.7 b662d000 28K rw--- [ anon ] b6634000 24K r-x-- /usr/lib/php5/20060613+lfs/pdo_mysql.so b663a000 4K rw--- 
/usr/lib/php5/20060613+lfs/pdo_mysql.so b663b000 16K r-x-- /usr/lib/libXdmcp.so.6.0.0 b663f000 4K rw--- /usr/lib/libXdmcp.so.6.0.0 b6640000 12K r-x-- /usr/lib/php5/20060613+lfs/clamav.so b6643000 4K rw--- /usr/lib/php5/20060613+lfs/clamav.so b6644000 1036K r-x-- /usr/lib/libc-client.so.2007.0 b6747000 28K rw--- /usr/lib/libc-client.so.2007.0 b674e000 4K rw--- [ anon ] b6750000 24K r-x-- /usr/lib/libltdl.so.3.1.6 b6756000 4K rw--- /usr/lib/libltdl.so.3.1.6 b6757000 32K r-x-- /usr/lib/php5/20060613+lfs/mcrypt.so b675f000 4K rw--- /usr/lib/php5/20060613+lfs/mcrypt.so b6760000 88K r-x-- /usr/lib/php5/20060613+lfs/imap.so b6776000 4K rw--- /usr/lib/php5/20060613+lfs/imap.so b6777000 104K r-x-- /usr/local/lib/libssh2.so b6791000 4K rw--- /usr/local/lib/libssh2.so b6792000 1324K r-x-- /usr/lib/ZendOptimizer.so b68dd000 68K rw--- /usr/lib/ZendOptimizer.so b68ee000 20K rw--- [ anon ] b68f3000 8K r-x-- /usr/lib/libXau.so.6.0.0 b68f5000 4K rw--- /usr/lib/libXau.so.6.0.0 b68f6000 52K r-x-- /usr/lib/php5/20060613+lfs/ssh2.so b6903000 4K rw--- /usr/lib/php5/20060613+lfs/ssh2.so b6904000 252K r---- /usr/lib/locale/en_US.utf8/LC_CTYPE b6974000 64K rw-s- /dev/zero (deleted) b6984000 36K r-x-- /lib/tls/i686/cmov/libnss_files-2.7.so b698d000 8K rw--- /lib/tls/i686/cmov/libnss_files-2.7.so b698f000 32K r-x-- /lib/tls/i686/cmov/libnss_nis-2.7.so b6997000 8K rw--- /lib/tls/i686/cmov/libnss_nis-2.7.so b6999000 28K r-x-- /lib/tls/i686/cmov/libnss_compat-2.7.so b69a0000 8K rw--- /lib/tls/i686/cmov/libnss_compat-2.7.so b69a2000 36K r-x-- /lib/libpam.so.0.81.6 b69ab000 4K rw--- /lib/libpam.so.0.81.6 b69ac000 28K r--s- /usr/lib/gconv/gconv-modules.cache b69b3000 8K r-x-- /usr/lib/apache2/modules/mod_userdir.so b69b5000 4K rw--- /usr/lib/apache2/modules/mod_userdir.so b69b6000 148K r-x-- /usr/lib/apache2/modules/mod_ssl.so b69db000 8K rw--- /usr/lib/apache2/modules/mod_ssl.so b69dd000 8K rw--- [ anon ] b69df000 8K r-x-- /usr/lib/apache2/modules/mod_setenvif.so b69e1000 4K rw--- /usr/lib/apache2/modules/mod_setenvif.so b69e2000 1128K r-x-- /usr/lib/libxml2.so.2.6.31 b6afc000 20K rw--- /usr/lib/libxml2.so.2.6.31 b6b01000 4K rw--- [ anon ] b6b02000 80K r-x-- /lib/tls/i686/cmov/libnsl-2.7.so b6b16000 8K rw--- /lib/tls/i686/cmov/libnsl-2.7.so b6b18000 8K rw--- [ anon ] b6b1a000 140K r-x-- /lib/tls/i686/cmov/libm-2.7.so b6b3d000 8K rw--- /lib/tls/i686/cmov/libm-2.7.so b6b3f000 60K r-x-- /lib/libbz2.so.1.0.4 b6b4e000 4K rw--- /lib/libbz2.so.1.0.4 b6b4f000 4K r-x-- /usr/lib/libxcb-xlib.so.0.0.0 b6b50000 4K rw--- /usr/lib/libxcb-xlib.so.0.0.0 b6b51000 56K r-x-- /usr/lib/apache2/modules/mod_rewrite.so b6b5f000 4K rw--- /usr/lib/apache2/modules/mod_rewrite.so b6b60000 5060K r-x-- /usr/lib/apache2/modules/libphp5.so b7051000 208K rw--- /usr/lib/apache2/modules/libphp5.so b7085000 20K rw--- [ anon ] b708a000 28K r-x-- /usr/lib/apache2/modules/mod_negotiation.so b7091000 4K rw--- /usr/lib/apache2/modules/mod_negotiation.so b7092000 12K r-x-- /usr/lib/apache2/modules/mod_mime.so b7095000 4K rw--- /usr/lib/apache2/modules/mod_mime.so b7096000 36K r-x-- /usr/lib/apache2/modules/mod_include.so b709f000 4K rw--- /usr/lib/apache2/modules/mod_include.so b70a0000 4K r-x-- /usr/lib/apache2/modules/mod_env.so b70a1000 4K rw--- /usr/lib/apache2/modules/mod_env.so b70a2000 4K r-x-- /usr/lib/apache2/modules/mod_dir.so b70a3000 4K rw--- /usr/lib/apache2/modules/mod_dir.so b70a4000 20K r-x-- /usr/lib/apache2/modules/mod_cgi.so b70a9000 4K rw--- /usr/lib/apache2/modules/mod_cgi.so b70aa000 28K r-x-- /usr/lib/apache2/modules/mod_autoindex.so 
b70b1000 4K rw--- /usr/lib/apache2/modules/mod_autoindex.so b70b2000 4K r-x-- /usr/lib/apache2/modules/mod_authz_user.so b70b3000 4K rw--- /usr/lib/apache2/modules/mod_authz_user.so b70b4000 8K r-x-- /usr/lib/apache2/modules/mod_authz_host.so b70b6000 4K rw--- /usr/lib/apache2/modules/mod_authz_host.so b70b7000 8K r-x-- /usr/lib/apache2/modules/mod_authz_groupfile.so b70b9000 4K rw--- /usr/lib/apache2/modules/mod_authz_groupfile.so b70ba000 8K rw--- [ anon ] b70bc000 12K r-x-- /lib/libgpg-error.so.0.3.0 b70bf000 4K rw--- /lib/libgpg-error.so.0.3.0 b70c0000 4K rw--- [ anon ] b70c1000 8K r-x-- /lib/libkeyutils-1.2.so b70c3000 4K rw--- /lib/libkeyutils-1.2.so b70c4000 28K r-x-- /usr/lib/libkrb5support.so.0.1 b70cb000 4K rw--- /usr/lib/libkrb5support.so.0.1 b70cc000 136K r-x-- /usr/lib/libk5crypto.so.3.1 b70ee000 4K rw--- /usr/lib/libk5crypto.so.3.1 b70ef000 300K r-x-- /lib/libgcrypt.so.11.2.3 b713a000 8K rw--- /lib/libgcrypt.so.11.2.3 b713c000 80K r-x-- /usr/lib/libz.so.1.2.3.3 b7150000 4K rw--- /usr/lib/libz.so.1.2.3.3 b7151000 4K rw--- [ anon ] b7152000 60K r-x-- /usr/lib/libtasn1.so.3.0.12 b7161000 4K rw--- /usr/lib/libtasn1.so.3.0.12 b7162000 160K r-x-- /usr/lib/libgssapi_krb5.so.2.2 b718a000 4K rw--- /usr/lib/libgssapi_krb5.so.2.2 b718b000 8K r-x-- /lib/libcom_err.so.2.1 b718d000 4K rw--- /lib/libcom_err.so.2.1 b718e000 556K r-x-- /usr/lib/libkrb5.so.3.3 b7219000 8K rw--- /usr/lib/libkrb5.so.3.3 b721b000 1192K r-x-- /usr/lib/i686/cmov/libcrypto.so.0.9.8 b7345000 84K rw--- /usr/lib/i686/cmov/libcrypto.so.0.9.8 b735a000 16K rw--- [ anon ] b735e000 248K r-x-- /usr/lib/i686/cmov/libssl.so.0.9.8 b739c000 16K rw--- /usr/lib/i686/cmov/libssl.so.0.9.8 b73a0000 452K r-x-- /usr/lib/libgnutls.so.13.9.1 b7411000 20K rw--- /usr/lib/libgnutls.so.13.9.1 b7416000 88K r-x-- /usr/lib/libsasl2.so.2.0.22 b742c000 4K rw--- /usr/lib/libsasl2.so.2.0.22 b742d000 60K r-x-- /lib/tls/i686/cmov/libresolv-2.7.so b743c000 8K rw--- /lib/tls/i686/cmov/libresolv-2.7.so b743e000 8K rw--- [ anon ] b7440000 8K r-x-- /lib/tls/i686/cmov/libdl-2.7.so b7442000 8K rw--- /lib/tls/i686/cmov/libdl-2.7.so b7444000 36K r-x-- /lib/tls/i686/cmov/libcrypt-2.7.so b744d000 8K rw--- /lib/tls/i686/cmov/libcrypt-2.7.so b744f000 160K rw--- [ anon ] b7477000 28K r-x-- /lib/tls/i686/cmov/librt-2.7.so b747e000 8K rw--- /lib/tls/i686/cmov/librt-2.7.so b7480000 12K r-x-- /lib/libuuid.so.1.2 b7483000 4K rw--- /lib/libuuid.so.1.2 b7484000 124K r-x-- /usr/lib/libexpat.so.1.5.2 b74a3000 8K rw--- /usr/lib/libexpat.so.1.5.2 b74a5000 396K r-x-- /usr/lib/libsqlite3.so.0.8.6 b7508000 8K rw--- /usr/lib/libsqlite3.so.0.8.6 b750a000 120K r-x-- /usr/lib/libpq.so.5.1 b7528000 4K rw--- /usr/lib/libpq.so.5.1 b7529000 1172K r-x-- /usr/lib/libdb-4.6.so b764e000 8K rw--- /usr/lib/libdb-4.6.so b7650000 4K rw--- [ anon ] b7651000 48K r-x-- /usr/lib/liblber-2.4.so.2.0.5 b765d000 4K rw--- /usr/lib/liblber-2.4.so.2.0.5 b765e000 244K r-x-- /usr/lib/libldap_r-2.4.so.2.0.5 b769b000 4K rw--- /usr/lib/libldap_r-2.4.so.2.0.5 b769c000 8K rw--- [ anon ] b769e000 1316K r-x-- /lib/tls/i686/cmov/libc-2.7.so b77e7000 4K r---- /lib/tls/i686/cmov/libc-2.7.so b77e8000 8K rw--- /lib/tls/i686/cmov/libc-2.7.so b77ea000 12K rw--- [ anon ] b77ed000 80K r-x-- /lib/tls/i686/cmov/libpthread-2.7.so b7801000 8K rw--- /lib/tls/i686/cmov/libpthread-2.7.so b7803000 8K rw--- [ anon ] b7805000 136K r-x-- /usr/lib/libapr-1.so.0.2.11 b7827000 4K rw--- /usr/lib/libapr-1.so.0.2.11 b7828000 4K rw--- [ anon ] b7829000 100K r-x-- /usr/lib/libaprutil-1.so.0.2.11 b7842000 4K rw--- 
/usr/lib/libaprutil-1.so.0.2.11 b7843000 152K r-x-- /usr/lib/libpcre.so.3.12.1 b7869000 4K rw--- /usr/lib/libpcre.so.3.12.1 b786a000 4K r-x-- /usr/lib/apache2/modules/mod_authz_default.so b786b000 4K rw--- /usr/lib/apache2/modules/mod_authz_default.so b786c000 4K r-x-- /usr/lib/apache2/modules/mod_authn_file.so b786d000 4K rw--- /usr/lib/apache2/modules/mod_authn_file.so b786e000 24K r-x-- /usr/lib/apache2/modules/mod_auth_digest.so b7874000 4K rw--- /usr/lib/apache2/modules/mod_auth_digest.so b7875000 8K r-x-- /usr/lib/apache2/modules/mod_auth_basic.so b7877000 4K rw--- /usr/lib/apache2/modules/mod_auth_basic.so b7878000 8K r-x-- /usr/lib/apache2/modules/mod_alias.so b787a000 4K rw--- /usr/lib/apache2/modules/mod_alias.so b787b000 8K rw--- [ anon ] b787d000 4K r-x-- [ anon ] b787e000 104K r-x-- /lib/ld-2.7.so b7898000 8K rw--- /lib/ld-2.7.so bfd68000 76K rwx-- [ stack ] bfd7b000 8K rw--- [ anon ] total 119008K I have no idea what's going on. I've tried adjusting the usual parameters (MaxClients, MaxRequestsPerClient, etc, but those don't do anything.) Note, also, that this is memory usage on startup - it doesn't grow, it just starts like this and then stays more or less constant. Ideas?

    Read the article

  • Windows 7 XP mode missing driver virtual pc integration device

    - by Charlie Wu
    I have XP Mode installed fine, but the shipped Windows XP installation is too heavy. I tried to install a light XP using my ASUS Eee PC. After installation, I have to install a few missing drivers and everything worked OK except I can't copy and paste between host and guest operating system. I checked Device Manager and found the Virtual PC Integration Device is not installed correctly. I can't find any driver for that. How can I fix this problem?

    Read the article

  • Time Machine + Ubee Router?

    - by Charlie
    I can't for the life of me figure this out. I recently had TWC installed in my house, and wanted to disable the NAT and router functions of it. I have a Time Machine hooked up to it from LAN1 (on the Ubee) to the WAN port on the TM. The problems started occurring here. I figured the settings would be these: Ubee Configuration mode: Bridge DHCP: Off TM IPv4: 192.168.100.2 Subnet Mask: 255.255.255.0 Router Address: 192.168.100.1 DNS Servers: 8.8.8.8, 8.8.4.4 Router Mode: DHCP and NAT But using those settings, my TM says "Double NAT", so I have to change it all around to the default settings of the Ubee using NAT. This leads me to believe bridge mode doesn't actually turn off NAT...

    Read the article

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.

    Read the article
