Search Results

Search found 25440 results on 1018 pages for 'agent based modeling'.


  • Alternative to Daemontools (djbtools) to supervise unix processes?

    - by Stefan Lasiewski
    I've used Daemontools to provide a simple and reliable way to supervise Unix services on my servers. It works well, but it requires a different way of thinking (The DJB Way) and some common complaints are:
    - TAI64N-based timestamps
    - Doesn't store scripts under /etc/init.d (or (/usr/local)/etc/rc.d)
    - Doesn't always work with scripts like apachectl; some scripts need to be rewritten.
    I remember that some similar "supervisor/watchdog" daemons were in the works about two years ago, but some were still a little rough around the edges. If you have switched from Daemontools to something else, what did you choose and did it work well for you? Does RedHat or Ubuntu come with any process supervisor utilities by default?

    Read the article

  • Where to find Oracle Training for BI & EPM Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    We run both “Live Virtual Training” (web-based classes) as well as “In Class Training” in most countries around Europe, the Middle East and Africa. Some of these are subsidised for OPN partners, while others are available at a discount (usually 25%) to OPN partners via OU (Oracle University). To see what is scheduled for in-depth, hands-on implementation training for partners, see:
    - Oracle Business Intelligence Enterprise Edition Plus Implementation Boot Camp. For example, these are some of the OBI 11g boot camps we currently have scheduled:
      - 11 - 15 June 2012, Bucharest, Romania
      - 21 - 23 August 2012, Johannesburg, South Africa
      - 24 - 28 September 2012, Utrecht, Netherlands
    - Oracle Essbase Implementation Boot Camp
    - Oracle GoldenGate Implementation Boot Camp
    - Hyperion Planning Boot Camp
    - Hyperion Financial Management Boot Camp
    - Oracle Business Intelligence Applications for ERP Boot Camp
    You can also filter your search for courses via the Partner Events Calendar @ http://events.oracle.com/search/search?group=Events&keyword=OPN+Only
    Otherwise, it is worth checking the Oracle Partner Enablement BLOG for any BI / EPM news, especially the sub-Blogs on the right for each country. There is also a monthly Partner Enablement Update (PDF) to find out about the latest partner training on Oracle's new products and new releases.

    Read the article

  • Handy Tool for Code Cleanup: Automated Class Element Reordering

    - by Geertjan
    You're working on an application and this thought occurs to you: "Wouldn't it be cool if I could define rules specifying that all static members, initializers, and fields should always be at the top of the class? And then, whenever I wanted to, I'd start off a process that would actually do the reordering for me, moving class elements around, based on the rules I had defined, automatically, across one or more classes or packages or even complete code bases, all at the same time?" Well, here you go: that's where you can set rules for the ordering of your class members. A new hint (i.e., new in NetBeans IDE 7.3), which you need to enable yourself because it is disabled by default, lets the IDE show a hint in the Java Editor whenever there's code that isn't ordered according to the rules you defined. The first element in a file that the Java Editor identifies as not matching your rules gets a lightbulb hint shown in the left sidebar. Then, when you click the lightbulb, the file is automatically reordered according to your defined rules. However, it's not much fun going through each file individually to fix class elements as shown above. For that reason, you can go to "Refactor | Inspect and Transform". There, in the "Inspect and Transform" dialog, you can choose the hint shown above and then specify that you'd like it to be applied to a scope of your choice, which could be a file, a package, a project, combinations of these, or all of the open projects. Then, when Inspect is clicked, the Refactoring window shows all the members that are ordered in ways that don't conform to your rules. Click "Do Refactoring" and, in one fell swoop, all the class elements within the selected scope are ordered according to your rules.

    Read the article

  • Sharing swap space between Windows and Ubuntu

    - by Leftium
    This Linux Swap Space Mini-HOWTO describes how to share swap space between Windows and Linux. Do these instructions still apply to Ubuntu in 2011? How should I modify the steps for Ubuntu? Is there a better approach to sharing swap space? Based on the HOWTO, it seems best to create a dedicated NTFS swap partition:
    - Dedicated, so the swap file will be contiguous and remain unfragmented.
    - NTFS, so both Windows and Ubuntu can read/write to it. (Or is FAT32 better for this purpose?)
    Then, configure Ubuntu to prepare the swap space for use by Linux on start-up and by Windows on shut-down. I want to dual boot Ubuntu and Windows 7 on my X301 laptop. However, my laptop only has a 64 GB SSD, so I would like to conserve as much disk space as possible.
    Update: There is an alternate method using a special driver for Windows that lets you use a Linux swap partition for temporary storage like a RAM-disk, but it doesn't seem to be as good...

    Read the article

  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
    New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated from 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both

    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation is affected by forecast tree branch size. TargetTaskSize is automatically calculated; it holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.

    - As the forecast is generated, the engine goes up the tree using max_fore_level and not top_level -1. Max_fore_level has to be less than or equal to top_level -1. Due to this requirement, combinations falling under the same top level -1 member must be in the same task. A member of the top level -1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.

    - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly; the focus is now on proper hierarchy / forecast tree design.

    - Control of tasks: the number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You may be used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.

    - Discovery of current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or, at times, completely removed from the forecast tree.

    - Control of forecast tree branch size:

      - Run the following SQL to determine how evenly the branches are being split by the engine:

            select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;

        This shows whether some individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.

      - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator):

            select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;

        This shows at which level of the forecast tree the forecast is being generated. Having a majority of combinations high on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on these results we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.

    For example:

      - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches.
      - If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or, at times, completely removed from the forecast tree.
      - Suppose the highest level of the forecast tree is set to Brand/All Locations, and you have 10 brands but 2 of the brands account for 67% and 29% of all combinations. There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
      - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
      - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.

    Allocation is affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for TargetTaskSize depending on the number of engines.

    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: Development strongly recommends that the setting of TargetTaskSize remain at the default of zero (0).

    - How to control the number of engines: determine how many CPUs are on the machine(s) running the engine. As mentioned earlier, the general rule is to designate 2 engines per available CPU. For example, if you are running the engine on a machine that has 4 CPUs, then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.

      Where do I set the number of engines? To add multiple computers where the engine will run, back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines. For example, this will allow 3 engines to start:

          <Entry>
            <Key argument="ComputerNames" />
            <Value type="string" argument="localhost,localhost,localhost" />
          </Entry>

    Otherwise, if there are no additional engines defined, the calculated value of TargetTaskSize is used. (Oracle does not recommend changing the default value.) TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations (also known as group size). The engine manager uses this parameter to attempt to create branches of similar size. Note that the engine manager will not create engines that do not have a branch. The engine divider algorithm uses the value of TargetTaskSize as a system-preferred branch size to create branches that are more equal in size, which improves engine performance.

    The engine divider will try to add as many branches as possible to an existing task, up to the limit of TargetTaskSize level 1 combinations, before adding new tasks (a rough sketch of this packing idea follows below).

    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters
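    Purely as an illustration of the packing behaviour described above, and emphatically not Demantra's actual implementation, the sketch below shows the greedy idea: branches, sized by their number of level-1 combinations, are appended to the current task until the next branch would push it past the target size, at which point a new task is started. A branch is never split across tasks, mirroring the top level -1 constraint described earlier.

        import java.util.ArrayList;
        import java.util.List;

        // Illustrative sketch only - not Demantra code. Packs branch sizes
        // (level-1 combination counts) into tasks without splitting a branch.
        final class DividerSketch {
            static List<List<Integer>> packBranches(List<Integer> branchSizes, int targetTaskSize) {
                List<List<Integer>> tasks = new ArrayList<>();
                List<Integer> current = new ArrayList<>();
                int currentSize = 0;
                for (int size : branchSizes) {
                    if (!current.isEmpty() && currentSize + size > targetTaskSize) {
                        tasks.add(current);              // current task is full: close it
                        current = new ArrayList<>();
                        currentSize = 0;
                    }
                    current.add(size);                   // a branch always stays whole
                    currentSize += size;
                }
                if (!current.isEmpty()) tasks.add(current);
                return tasks;
            }
        }

    For example, branch sizes 40, 35, 10, 60 with a target of 50 would pack into three tasks: [40], [35, 10] and [60].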

    Read the article

  • Eventtriggers frequency

    - by holian
    Masters, I am trying to set up some event tasks on Windows Server 2003. I am using this tutorial: http://www.petri.co.il/how-to-use-eventtriggersexe-to-send-e-mail-based-on-event-ids.htm My problem is that when I set an event such as "If Event ID 528 appears in the security log, then send an e-mail", the event trigger fires the task continuously and I get the mail over and over. Any suggestion on how to set eventtriggers.exe to send the e-mail only once after the event occurs in the event log? Thank you.

    Read the article

  • QuickTime X incorrect aspect ratio for H.264 video

    - by Adam Robinson
    I'm running Snow Leopard and have a serious issue with QuickTime X. I have a Samsung HMX-H100N/XAA camcorder that records H.264 video in either 720p or 1080i. In either of these resolutions, QuickTime X (and, by extension, all QuickTime-associated applications like FCP, iMovie, etc.) displays an incorrect aspect ratio for all video produced by this camcorder. For example, 720p video is reported as being 1280x720 in the movie inspector (which is normal), but the displayed size is always at an aspect ratio of something like 63:20 (never heard of such a ratio) with sizes like 1700x539. If I open the video in QuickTime 7 Player on the same computer, it is displayed correctly. If I process the video through something like MPEG Streamclip to transcode it, it displays correctly. As it stands right now I have to transcode all of my video in order to use it in any iLife (or other QuickTime-based) application unless I want it to look ridiculous. I've tried installing Perian, but that seemed to have no effect.

    Read the article

  • Tmux causes Emacs glitch

    - by killy9999
    Recently I started using Tmux, but I noticed that it causes a strange Emacs glitch. When I open source code for Elisp or Haskell, the comments aren't highlighted; only the comment sign is (; in the case of Elisp, -- in the case of Haskell). The rest of the commented line is in the normal colour. When I run Emacs outside of Tmux everything works as expected - the whole commented line is highlighted in a colour denoting a comment. Any ideas why this is happening?
    SOLUTION: Based on Stefan's comment I added this to my .emacs file:

        (custom-set-faces
         '(font-lock-comment-face
           ((((class color) (min-colors 8) (background dark))
             (:foreground "red")))))

    Now the comments are displayed in red, just like the comment delimiters.

    Read the article

  • FTP client that supports 2 concurrent FTP sessions

    - by oninea
    I'm looking for an FTP client that can connect to two different FTP servers at the same time and allow file transfer or synchronization between those two servers. Basically what I want to achieve is to transfer/synchronize files between 2 different sites from my local machine. Are there any clients around that support this functionality? If there are none, is there an alternative to achieve this? I've taken a look at net2ftp, a web based FTP client, which provides almost the same functionality that I need. What I'm looking for though is a desktop app. Any ideas?

    Read the article

  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object (this object could be another player). The packets are sent on a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:
    1. Normal packet timing; everything is okay.
    2. The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
    3. The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?
    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
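    One approach that avoids both snapping and rubber-banding is to keep extrapolating from the most recent authoritative packet and, when a late correction arrives, store the visual difference as an error offset that is bled off over the next few frames. The sketch below is a minimal, hedged illustration of that idea in Java; the class and method names (RemoteEntity, onPacket, and so on) are hypothetical and not from the post above, and a real game would also need clock synchronisation and better velocity smoothing.

        // Minimal sketch (not production code) of folding late or corrective packets
        // into an ongoing extrapolation instead of snapping to the corrected position.
        final class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 add(Vec3 o)     { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 sub(Vec3 o)     { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 scale(double s) { return new Vec3(x * s, y * s, z * s); }
        }

        final class RemoteEntity {
            private Vec3 anchorPos = new Vec3(0, 0, 0);  // position from the latest packet
            private Vec3 velocity  = new Vec3(0, 0, 0);  // estimated from the last two packets
            private Vec3 errorOffs = new Vec3(0, 0, 0);  // visual error still to be bled off
            private double anchorTime = Double.NaN;      // local receive time of latest packet

            // Called for every packet, whether it is on time, late, or early.
            void onPacket(Vec3 packetPos, double now) {
                if (Double.isNaN(anchorTime)) {          // first packet: just adopt it
                    anchorPos = packetPos;
                    anchorTime = now;
                    return;
                }
                Vec3 shownPos = positionAt(now);         // where we are currently drawing it
                double dt = now - anchorTime;
                if (dt > 1e-6) {                         // crude velocity estimate from the packet pair
                    velocity = packetPos.sub(anchorPos).scale(1.0 / dt);
                }
                anchorPos = packetPos;                   // re-anchor on the authoritative state...
                anchorTime = now;
                errorOffs = shownPos.sub(packetPos);     // ...and keep the visual difference
            }

            // Position to render this frame: extrapolation plus the decaying error offset.
            Vec3 positionAt(double now) {
                double dt = now - anchorTime;
                return anchorPos.add(velocity.scale(dt)).add(errorOffs);
            }

            // Call once per frame: bleeds the error off over roughly 1/decayRate seconds,
            // so corrections are absorbed smoothly instead of causing a visible jump.
            void frameUpdate(double frameDt, double decayRate) {
                double keep = Math.max(0.0, 1.0 - decayRate * frameDt);
                errorOffs = errorOffs.scale(keep);
            }
        }

    The same error-offset idea carries over if the underlying predictor is quadratic or cubic rather than linear: only positionAt changes, while the correction handling stays the same.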

    Read the article

  • Know which Apps to Remove From MSConfig with this Startup Applications List

    - by Mit Naik
    Just found useful information on the Internet and thought it would help all users. This list of startup applications is a really handy resource for cleaning up msconfig entries that have overtaken old computers. It catalogs tons of different startup programs, what they do, and which ones you should delete, leave running, or decide on based on the program's usefulness. It even has a nice search box so you can search through the tens of thousands of entries. Hit the link below to check it out, and if your relatives' computer is especially broken, be sure to check out our guide to fixing your relatives' terrible computer. http://www.sysinfo.org/startuplist.php Please update the list here if you know of any other tools or sites which could be helpful to others.

    Read the article

  • Most efficient way to handle coordinate maps in Java

    - by glowcoder
    I have a rectangular tile-based layout. It's your typical Cartesian system. I would like to have a single class that handles two lookup styles:
    1. Get me the set of players at position X,Y
    2. Get me the position of player with key K
    My current implementation is this:

        class CoordinateMap<V> {
            Map<Long,Set<V>> coords2value;
            Map<V,Long> value2coords;

            // convert (int x, int y) to long key - this is tested, works for all values -1bil to +1bil
            // My map will NOT require more than 1 bil tiles from the origin :)
            private Long keyFor(int x, int y) {
                int kx = x + 1000000000;
                int ky = y + 1000000000;
                return (long)kx | (long)ky << 32;
            }

            // extract the x and y from the keys
            private int[] coordsFor(long k) {
                int x = (int)(k & 0xFFFFFFFF) - 1000000000;
                int y = (int)((k >>> 32) & 0xFFFFFFFF) - 1000000000;
                return new int[] { x, y };
            }
        }

    From there, I proceed to have other methods that manipulate or access the two maps accordingly. My question is... is there a better way to do this? Sure, I've tested my class and it works fine. And sure, something inside tells me if I want to reference the data by two different keys, I need two different maps. But I can also bet I'm not the first to run into this scenario. Thanks!
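    For what it's worth, keeping the two maps consistent is mostly a matter of funnelling every mutation through a small set of methods. The snippet below is a hedged sketch of how such mutators might look inside the CoordinateMap class above; the method names (addPlayer, movePlayer, playersAt, positionOf) are hypothetical, and it assumes the usual java.util imports and that the two map fields are initialised (e.g. as HashMaps).

        // Hypothetical mutators for the CoordinateMap<V> above (not from the original
        // post), assuming import java.util.*; every change touches both maps.
        public void addPlayer(V player, int x, int y) {
            long key = keyFor(x, y);
            coords2value.computeIfAbsent(key, k -> new HashSet<>()).add(player);
            value2coords.put(player, key);
        }

        public void removePlayer(V player) {
            Long key = value2coords.remove(player);
            if (key == null) return;                        // unknown player
            Set<V> atKey = coords2value.get(key);
            if (atKey != null) {
                atKey.remove(player);
                if (atKey.isEmpty()) coords2value.remove(key);  // don't leak empty sets
            }
        }

        public void movePlayer(V player, int newX, int newY) {
            removePlayer(player);
            addPlayer(player, newX, newY);
        }

        public Set<V> playersAt(int x, int y) {
            return coords2value.getOrDefault(keyFor(x, y), Collections.emptySet());
        }

        public int[] positionOf(V player) {
            Long key = value2coords.get(player);
            return key == null ? null : coordsFor(key);
        }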

    Read the article

  • Configuring a USB modem (Huawei EC156) in Ubuntu 13.10

    - by user205427
    I am facing difficulty installing my USB modem in Ubuntu 13.10. Contrary to what many have suggested, it does not get detected automatically, nor does setting up a new connection help. The USB device is listed in lsusb, but not under Network Manager or Devices; it is detected as a CD-ROM. What I understood from the web is that usb-modeswitch can be used to switch it to a USB modem device. Even the 'Enable Mobile Broadband' option is not shown in Network Manager. What is interesting is that when I start the laptop with Windows 7 and use the USB modem, and after that restart with Ubuntu, both 'Enable Mobile Broadband' and the mobile broadband connection can be seen. Sadly, the internet connection could not be established. I tried using the usb_modeswitch command as suggested somewhere, but it does not seem to work. Following is the message:

        Take all parameters from the command line
        * usb_modeswitch: handle USB devices with multiple modes
        * Version 2.0.1 (C) Josua Dietze 2013
        * Based on libusb1/libusbx
        ! PLEASE REPORT NEW CONFIGURATIONS !

        DefaultVendor= 0x12d1
        DefaultProduct= 0x1505
        HuaweiMode=1
        NeedResponse=0
        InquireDevice enabled (default)

        Look for default devices ...
         found USB ID 8087:0020
         found USB ID 1d6b:0002
         found USB ID 0461:4db6
         found USB ID 12d1:1505
           vendor ID matched
           product ID matched
         found USB ID 138a:0007
         found USB ID 03f0:231d
         found USB ID 8087:0020
         found USB ID 1d6b:0002
         Found devices in default mode (1)
        Access device 005 on bus 001
        Get the current device configuration ...
        OK, got current device configuration (1)
        Use interface number 0
        Use endpoints 0x08 (out) and 0x87 (in)
        Inquire device details; driver will be detached ...
        Looking for active driver ...
         OK, driver detached
        INQUIRY message failed (error -9)

        USB description data (for identification)
        -------------------------
        Manufacturer: HUA?WEI TECHNOLOGIES
        Product: HUAWEI Mobile
        Serial No.: ???????????????????
        -------------------------
        Send old Huawei control message ...
        -> Run lsusb to note any changes. Bye!

    I have been stuck with this problem for 4 days now; any help would be appreciated.

    Read the article

  • Direct IO enhancements in OVM Server for SPARC 2.2 (a.k.a. LDoms 2.2)

    - by user12611315
    The Direct I/O feature has been available for LDoms customers since LDoms 2.0. Apart from the latest SR-IOV feature in LDoms 2.2, it is worth noting a few enhancements to the Direct I/O feature. These are:

    Support for Metis-Q and Metis-E cards. Support for these cards is highly requested and they are worth mentioning because they are the only combo cards containing both Fibre Channel and Ethernet in the same card. With this support, a customer can have both SAN storage and network access with just one card and one PCIe slot assigned to a logical domain. This reduces cost and helps when there are fewer slots in a given platform. The part numbers for these cards are listed below. I have tried to note the platforms on which each card is supported, but this information can get quickly outdated; the accurate information can be found in the Support Document.

    - Metis-Q: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, QLogic
      Part Number: SG-XPCIEFCGBE-Q8-N; Platforms: SPARC T3-4, T4-4
    - Metis-E: StorageTek Dual 8Gb Fibre Channel Dual GbE ExpressModule HBA, Emulex
      Part Number: SG-XPCIEFCGBE-E8-N; Platforms: SPARC T3-4, T4-4

    Additional cards added to the portfolio of supported cards. These are mainly Powerville-based Ethernet cards; the part numbers are as follows:

    - 7100477: Sun Quad Port GbE PCI Express 2.0 Low Profile Adapter, UTP
    - 7100481: Sun Dual Port GbE PCI Express 2.0 Low Profile Adapter, MMF
    - 7100483: Sun Quad Port GbE PCI Express 2.0 ExpressModule, UTP
    - 7110486: Sun Quad Port GbE PCI Express 2.0 ExpressModule, MMF

    Note: the Direct I/O feature has a hard dependency on the root domain (the PCIe bus owner, here the primary domain). That is, rebooting the root domain for any reason may impact the logical domains that have PCIe slots assigned with the Direct I/O feature, so rebooting a root domain needs to be carefully managed. Also apply the failure-policy settings as described in the admin guide and release notes to deal with unexpected cases.

    Read the article

  • How can I calculate a vertex normal for a hard edge?

    - by K.G.
    Here is a picture of a lovely polygon: circled is a vertex, and numbered are its adjacent faces. I have calculated the normals of those faces as such (not yet normalized, 0-indexed):

        Vertex 1 normal 0:  0.000000 0.000000 -0.250000
        Vertex 1 normal 1:  0.000000 0.000000 -0.250000
        Vertex 1 normal 2: -0.250000 0.000000  0.000000
        Vertex 1 normal 3: -0.250000 0.000000  0.000000
        Vertex 1 normal 4:  0.250000 0.000000  0.000000

    What I'm wondering is: given that I want this vertex to represent a hard edge, how can I determine whether its normal should be the normal of faces 1/2 or 3/4? My plan, after I glanced at the sketch I used to put this together, was "Ha! I'll just use whichever two faces have the same normal!" - and now I see that there are two sets of two faces for which this is true. Is there a rule I can apply, based on the face winding, angle of the adjacent edges, moon phase, or coin flip, to consistently choose a normal direction for this box? For the record, all of the other polygons I plan to use will have their normals dictated in Maya, but after encountering this problem, it made me really curious.
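    One common way to answer this without picking a winner at all is to treat it as a smoothing-group problem: duplicate the vertex and give each group of near-parallel faces its own averaged normal, so that faces 1/2 and 3/4 each keep the hard edge they imply. Below is a hedged Java sketch of that grouping step; the float[] representation, the helper names and the 60-degree threshold are illustrative assumptions, not anything from the original question or Maya's behaviour.

        import java.util.ArrayList;
        import java.util.List;

        // Hedged sketch of the "smoothing group by angle threshold" idea: rather than
        // choosing between faces 1/2 and 3/4, the vertex is duplicated and each group
        // of near-parallel faces gets its own averaged normal.
        final class HardEdgeNormals {

            static float[] sub(float[] a, float[] b) {
                return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
            }

            static float[] cross(float[] a, float[] b) {
                return new float[] {
                    a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0]
                };
            }

            static float dot(float[] a, float[] b) {
                return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
            }

            static float[] normalize(float[] v) {
                float len = (float) Math.sqrt(dot(v, v));
                return len == 0 ? v : new float[] { v[0] / len, v[1] / len, v[2] / len };
            }

            // Face normal from three vertices in counter-clockwise winding order.
            static float[] faceNormal(float[] p0, float[] p1, float[] p2) {
                return normalize(cross(sub(p1, p0), sub(p2, p0)));
            }

            // Clusters the (normalized) face normals around one vertex: a normal joins a
            // cluster if it is within maxAngleDeg of that cluster's first member, otherwise
            // it starts a new cluster. Each cluster gets one averaged normal, and each
            // cluster would get its own copy of the vertex in the final mesh.
            static List<float[]> groupedVertexNormals(List<float[]> faceNormals, double maxAngleDeg) {
                double cosThreshold = Math.cos(Math.toRadians(maxAngleDeg));
                List<List<float[]>> clusters = new ArrayList<>();
                for (float[] n : faceNormals) {
                    List<float[]> home = null;
                    for (List<float[]> c : clusters) {
                        if (dot(c.get(0), n) >= cosThreshold) { home = c; break; }
                    }
                    if (home == null) { home = new ArrayList<>(); clusters.add(home); }
                    home.add(n);
                }
                List<float[]> averaged = new ArrayList<>();
                for (List<float[]> c : clusters) {
                    float[] sum = new float[3];
                    for (float[] n : c) { sum[0] += n[0]; sum[1] += n[1]; sum[2] += n[2]; }
                    averaged.add(normalize(sum));
                }
                return averaged;
            }
        }

    With the five normals listed above and a threshold of, say, 60 degrees, faces 0/1, 2/3 and 4 would fall into three clusters, so the box keeps hard edges in every direction instead of forcing one pair to win.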

    Read the article

  • Fortigate 200A firewall CPU high resource usage

    - by user119720
    This morning I am receiving complaints from several end users saying that their whole department's network is slow and intermittent. I therefore checked our firewall to see whether something has gone wrong with the device. From my observation of the FortiGate dashboard status, CPU usage is very high (99 percent). My first thought was to clear the log, since the FortiGate alert log mentions that it is already 90% full. Based on my understanding, the log can be cleared by restarting the firewall. After restarting the firewall the network seemed okay again, but after several minutes the CPU went up again, and the condition still persists now. Can someone show me where else I can check to fix this issue? I'd really appreciate any help I can get here. Thanks.
    Edit: diag sys top command

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. Since the peaks in available entropy and MySQL slow queries occur in sync and at regular intervals, the number of MySQL slow queries is proportional to our number of Drupal users, and the peaks in available entropy seem much more constant and less proportional to those two metrics, we're thinking available entropy reflects a root cause which, combined with the traffic to our website, is causing those slow queries (and not the opposite, slow queries influencing the entropy). Accordingly:
    Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
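    As a side note, if you want to correlate the Munin graph with spot checks on the box itself, the Linux kernel exposes the current entropy pool size at /proc/sys/kernel/random/entropy_avail (reported in bits, not bytes). A minimal, hedged Java sketch for polling it alongside slow-query counts might look like the following; the polling interval and output format are arbitrary choices, not anything from the original question.

        import java.nio.file.Files;
        import java.nio.file.Path;

        // Hedged sketch: polls the Linux entropy pool so its dips can be lined up
        // with slow-query spikes. Requires Linux and Java 11+ (Files.readString).
        public final class EntropyWatch {
            public static void main(String[] args) throws Exception {
                Path entropyFile = Path.of("/proc/sys/kernel/random/entropy_avail");
                while (true) {
                    int bits = Integer.parseInt(Files.readString(entropyFile).trim());
                    System.out.printf("%d available entropy: %d bits%n",
                            System.currentTimeMillis() / 1000, bits);
                    Thread.sleep(5_000);          // arbitrary 5-second polling interval
                }
            }
        }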

    Read the article

  • Referrer Strings

    - by WernerCD
    I'm not exactly sure how to ask this, but here I go. I work for a company that provides IT services. Our main customer has a website developed by WebCompany.com, and I act as a maintainer/middle-man for this website for our customer. What's happening is that on the front page there is a drop-down menu with links. One link goes to a third-party website - ThirdParty.com. This third-party website automatically logs you in based on your IP or your referrer string:
    1. Homepage
    2. ThirdParty.com
    3. Logged in via IP or referrer string.
    What's happening is that Website.com is passing along inconsistent referrer strings. I've asked them to look into it and they say it's not their fault; referrer strings are handled by the browser. ThirdParty.com is actually nice enough to list the IP and referrer string on its front page for users who are not logged in. So... how can I trace the referrer string so that I can figure out why http://Website.com's front-page link to ThirdParty.com is showing a referrer of "None" or "Thirdparty.com" instead of "ThirdParty.com"? Their website is built in PHP. Can you force all links on a website to list the "referrer" as "http://website.com/" instead of "http://website.com/portal/1" or "http://website.com/page3"?

    Read the article

  • Profit : August, 2012

    - by user462779
    The August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled “Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask),” and it discussed “cutting-edge” technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we’re now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it’s SOA in 2012. While preparing the August issue of Profit was more than just a stroll down memory lane for me, it has provided a nice bit of perspective about what changes and what doesn’t in this dynamic IT industry. Technologies continuously evolve—some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews.
    - The Middleware Advantage: Three ways a flexible, integrated software layer can deliver a competitive edge.
    - Playing to Win: Electronic Arts’ superefficient hub processes millions of online gaming transactions every day.
    - Adjustable Loans: With Oracle Exadata, Reliance Commercial Finance keeps pace with India’s commercial loan market.
    - Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate.
    - Spring Training: Knowledge and communication help Jackson Hewitt’s Tim Bechtold get seasonal workers in top shape.
    - Keeping Online Customers Happy: Customers worldwide are comfortable with online service—but are companies meeting customers’ needs?

    Read the article

  • How can I use `SetEnvIf` to clear an Apache2 environment variable?

    - by Jamie
    In my apache2 configuration I've got these lines:

        SetEnv log_everything

        # Create the environment variables based on access requests
        SetEnvIf Request_URI "^/orders/.*$" download_access !log_everything
        SetEnvIf Request_URI "^/download/.*$" download_access !log_everything
        SetEnvIf Request_URI "^/wg/.*$" wg_1x1_access !log_everything

        # Log the accesses using the generated environment variables as conditionals.
        CustomLog ${APACHE_LOG_DIR}/download.log combined env=download_access
        CustomLog ${APACHE_LOG_DIR}/wg.log combined env=wg_1x1_access

        RewriteEngine on
        RewriteRule "^/wg/.+$" "/wg/1x1.gif"

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined env=log_everything

    This currently logs all the "download" and "orders" requests to download.log and "wg" requests to wg.log, but everything is also going to access.log. How can I configure this so that "wg" and "download"/"orders" requests won't be duplicated in access.log?

    Read the article

  • Piping powershell messages to Write-EventLog

    - by Richard
    I have a PowerShell script that runs a custom cmdlet. It is run by Task Scheduler and I want to log what it does. This is my current crude version:

        Add-PsSnapIn PianolaCmdlets
        Write-EventLog -LogName "Windows Powershell" -Source "Powershell" -Message "Starting Update-EbuNumbers" -EventId 0
        Get-ClubMembers -HasTemporaryEbuNumber -show all | Update-EbuNumbers -Verbose
        Write-EventLog -LogName "Windows Powershell" -Source "Powershell" -Message "Finished Update-EbuNumbers" -EventId 0

    What I would like to do is log the output of my custom cmdlet. Ideally I'd like to create different types of event log entries based on whether it was a warning or a verbose message.
    Update: I don't want to log the return value of the cmdlet. The Update-EbuNumbers cmdlet does not return an object. I want to log any verbose messages written by WriteVerbose, and I want to log errors created by ThrowTerminatingError.

    Read the article

  • Making files generally available on Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB harddrive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments - well, only 260 fps, but that still creates many individual files since each frame is saved as one png file. The stationary PC is not used by anyone other than me, which is why the default security model posed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the harddrive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made grabbing utility (console based) in sudo mode. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when having to make files generally available to otb:

        sudo chown otb -R *
        sudo chgrp otb -R *
        sudo chmod a=rwx -R *

    This takes some time since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the harddrive to another system where the user otb is also available? Would the files still be accessible without using sudo?

    Read the article

  • Is micro-optimisation important when coding?

    - by BozKay
    I recently asked a question on stackoverflow.com to find out why isset() was faster than strlen() in PHP. This raised questions about the importance of readable code and whether performance improvements of microseconds in code were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not a good programmer. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kinds of micro-performance improvements? Perhaps this kind of consideration is up to the people who write the actual language code (of PHP in the above case)? The environmental factors could also be important - the internet consumes 10% of the world's energy, and I wonder how wasteful a few microseconds of code are when replicated trillions of times on millions of websites. I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding?
    EDIT: My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us avoid making obviously bad choices when coding, such as

        if (expensiveFunction() && counter < X)

    which should be

        if (counter < X && expensiveFunction())

    (example from @zidarsk8). This could be an inexpensive function, and therefore changing the code would be micro-optimisation. But with a basic understanding, you would not have to, because you would write it correctly in the first place.

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload? (This question was originally asked on SuperUser; feel free to migrate answers here if you know how).

    Read the article

  • FTP Synchronization software for Mac or PC

    - by evanmcd
    Hi, I've been using FTP Synchronizer for a while and have generally had pretty good results with it. But I've just moved to a Mac full-time (at work as well as at home now), so I want to get a native client if I can. I've tried the only one that I've found - SuperFlexibleSynchronizer - but it crashed every time I loaded up an FTP-to-FTP synch attempt. The most important features to me are: 1) the ability to synch a large number of files (thousands), as I generally work on sites with large numbers of files, and 2) FTP-to-FTP synch. This would be very helpful, as I work with some CMS-based sites where users upload files while on staging, and I don't want to have to move files locally first before moving them live. Thanks! Evan

    Read the article
