Search Results

Search found 3661 results on 147 pages for 'timer jobs'.


  • SEO Tools Vs Human Power - Can SEO Be Automated?

    After some 10 years of existence, SEO is not only deeply rooted in our internet-marketing life, but is even claimed to be going automated. Hundreds of SEO tools built to ease your website promotion jobs have flooded the market, and some of them, their developers would have you believe, optimize your website entirely on autopilot. But can such tools really eliminate the need for manpower? Is automated SEO a myth or a reality?

    Read the article

  • What is Site Rubix? Find the Answer Here!

    Online marketing has really taken off lately, and that comes as no surprise. Many people who currently work "for the man" have found that man to be untrustworthy, as so many have lost their jobs through cutbacks and downsizing.

    Read the article

  • Which of these studies would benefit a CS student the most? [closed]

    - by user1265125
    Which of these extra-curricular studies would benefit a CS student the most?
    - Algorithms
    - Advanced OS programming
    - Image processing
    - Computer graphics
    - Open source development
    - Practicing on TopCoder or Codechef
    - Something else?
    I realize the decision can be influenced by a number of factors, such as personal preference, what's currently hot in the job market, and what is likely to be in greater demand in the future. However, I would like to ask more experienced programmers which one(s) of these would be most beneficial to learn alongside all the required CS academics.

    Read the article

  • 3 Tools to Automate Your SEO Efforts

    When it boils down to it, a lot of SEO work is pretty repetitive and mundane. We all know that we need links to get high rankings, but the thought of having to spend hours submitting to directories is enough to make me find a million and one other jobs - even going as far as calling the in-laws! You can outsource the submission process, but it has become debatable just how effective these submissions are; you could be throwing money away.

    Read the article

  • Sql Table Refactoring Challenge

    I've been working a bit on cleaning up a large table to make it more efficient. I pretty much know what I need to do at this point, but I figured I'd offer up a challenge for my readers, to see if they can catch everything I have as well as to see if I've missed anything. So to that end, I give you my table:

    CREATE TABLE [dbo].[lq_ActivityLog](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [PlacementID] [int] NOT NULL,
        [CreativeID] [int] NOT NULL,
        [PublisherID] [int] NOT NULL,
        [CountryCode] [nvarchar](10) NOT NULL,
        [RequestedZoneID] [int] NOT NULL,
        [AboveFold] [int] NOT NULL,
        [Period] [datetime] NOT NULL,
        [Clicks] [int] NOT NULL,
        [Impressions] [int] NOT NULL,
        CONSTRAINT [PK_lq_ActivityLog2] PRIMARY KEY CLUSTERED (
            [Period] ASC, [PlacementID] ASC, [CreativeID] ASC, [PublisherID] ASC,
            [RequestedZoneID] ASC, [AboveFold] ASC, [CountryCode] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    And now some assumptions and additional information:
    - The table currently has 200,000,000 rows.
    - PlacementID ranges from 1 to 5,000 and should support at least 50,000.
    - CreativeID ranges from 1 to 5,000 and should support at least 50,000.
    - PublisherID ranges from 1 to 500 and should support at least 50,000.
    - CountryCode is a 2-character ISO standard code (e.g. US), and there is already a country table with an integer ID. There are < 300 rows.
    - RequestedZoneID ranges from 1 to 100 and should support at least 50,000.
    - AboveFold has values of -1, 0, or 1 only.
    - Period is a date (no time).
    - Clicks range from 0 to 5,000.
    - Impressions range from 0 to 5,000,000.
    The table is currently write-mostly. Its primary purpose is to log advertising activity as quickly as possible. Nothing in the rest of the system reads from it except for batch jobs that pull the data into summary tables. Here's the current information on the database table's size:

    Design Goals: This table has been in use for about 5 years and has performed very well during that time. The only complaints we have are that it is quite large, and that there are occasionally timeouts for queries that reference it, particularly when batch jobs are pulling data from it. Any changes should be made with an eye toward keeping write performance optimal while trying to reduce space and improve read performance / eliminate timeouts during read operations.

    Refactor: There are, I suggest to you, some glaringly obvious optimizations that can be made to this table. And I'm sure there are some ninja tweaks known to SQL gurus that would be a big help as well. I'll post my own suggested changes in a follow-up post; for now, feel free to comment with your suggestions.
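
    As a first-pass guess at the "glaringly obvious" optimizations (my own sketch, not the author's follow-up answer), the wide columns can be shrunk to match their stated ranges and the country code can be swapped for the existing integer lookup. Assuming SQL Server 2008 or later; the new table and column names are invented for illustration:

    -- Speculative refactoring sketch only; names other than the originals are made up.
    CREATE TABLE [dbo].[lq_ActivityLog_v2](
        [Period]          [date]     NOT NULL,  -- was datetime (8 bytes); date only needs 3
        [PlacementID]     [int]      NOT NULL,  -- must grow past 32,767, so int stays
        [CreativeID]      [int]      NOT NULL,
        [PublisherID]     [int]      NOT NULL,
        [RequestedZoneID] [int]      NOT NULL,
        [CountryID]       [smallint] NOT NULL,  -- joins the existing < 300-row country table;
                                                -- replaces the 20+ byte nvarchar(10) CountryCode
        [AboveFold]       [smallint] NOT NULL,  -- only -1/0/1; tinyint if the -1 state is remapped
        [Clicks]          [smallint] NOT NULL,  -- 0..5,000 fits in 2 bytes
        [Impressions]     [int]      NOT NULL,  -- 0..5,000,000 still needs 4 bytes
        -- the unused bigint IDENTITY column is dropped entirely
        CONSTRAINT [PK_lq_ActivityLog_v2] PRIMARY KEY CLUSTERED (
            [Period], [PlacementID], [CreativeID], [PublisherID],
            [RequestedZoneID], [AboveFold], [CountryID]
        ) WITH (DATA_COMPRESSION = PAGE)        -- Enterprise-only, but a big space win on a log table
    ) ON [PRIMARY];

    Whether dropping the ID column, reordering the clustered key, or enabling page compression survives contact with the real write workload is exactly what the challenge invites readers to argue about.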

    Read the article

  • Building an E-Commerce Website? 7 Tips For Finding Your Ideal Technology Teacher

    Like many adults, until recently, I'd never contemplated building a website and learning online technology to create an e-business. Still, the growth of the world-wide-web and my location in an area where good-paying jobs are hard to find forced me to re-evaluate whether I could master enough website technology to build a blog and do business online. The one thing I did know was that a good teacher or consultant could make the difference between success and failure.

    Read the article

  • The Appalling Reaction to the Apple iPhone Leak

    ABC News: "The contempt that Apple (and Steve Jobs in particular) holds toward the media -- and its willingness to manipulate the press for its own ends -- should have produced a media backlash. There should be inside-Apple scoops in the press every week as intrepid reporters go over, under and around every arbitrary barrier Apple puts in front of them."

    Read the article

  • Managing Printers with Group Policy, PowerShell, and Print Management

    Just because it is possible to do many configuration jobs 'click by bleeding click' doesn't mean that it is a good idea. It is better to step back, plan, and use the advanced resources provided for managing large networks. Printer configuration is the perfect illustration of this, and Joseph demonstrates how the use of Group Policy, PowerShell, and Print Management can turn a time-consuming chore into a pleasure.

    Read the article

  • How does a programmer without a degree gain experience? [on hold]

    - by user96872
    Having a few years of experience is a must for many programming jobs nowadays. If one does not have a college degree but would like to get some experience with programming (with some prior knowledge, say, in JavaScript, PHP and Python), what are some ways to gain the experience that employers seek? I know about personal projects, but how about team experience and everything that goes along with it? Would I need to volunteer somewhere?

    Read the article

  • Reasons to Use a Work Order Software System

    Keeping track of the bottom line has never been more critical than in these trying economic times. That's why many companies are choosing to use a work order system to better keep track of jobs and m... [Author: Belinda Verducci - Computers and Internet - June 05, 2010]

    Read the article

  • The Lasting Future of Search Engine Optimization

    Search engine optimization surely has a long-lasting future. The worth of organic SEO has increased thanks to its distinctive techniques of on-page and off-page optimization. That is why you can find most of the lucrative SEO jobs on the internet nowadays.

    Read the article

  • CNC Information - Data Storage and Transfer

    A CNC machine is worth trying whenever there is a need to improve speed and accuracy. The machine performs better at repetitive tasks and gets large jobs done more quickly. Woodworking shops or industria... [Author: Scheygen Smith - Computers and Internet - March 21, 2010]

    Read the article

  • Why do companies go through recruitment agencies when they need developers? What do they get out of it?

    Why don't companies publish detailed ads for the developer jobs they are offering, instead of going through recruitment agencies or even SSII (IT consulting firms), when the project lasts more than a year? I am convinced that in 99% of cases it is ridiculous to go through a recruitment agency (for a start, it costs them at least 1,500 euros, payable once the new employee has completed the trial period), and that the main effect, on top of the net loss on salary (that's 1,500 euros less on...

    Read the article

  • Component Activities of SEO!

    Every website is designed for a purpose, and that purpose remains unfulfilled without good visibility in the search results. The SEO professionals at SEO companies do the jobs that improve a site's ranking and thereby its visibility.

    Read the article

  • No audio with headphones, but audio works with integrated speakers

    - by Pedro
    My speakers work correctly, but when I plug in my headphones, they don't work. I am running Ubuntu 10.04. My audio card is a Realtek ALC259 and my laptop is an HP G62t a10em. In another thread someone fixed a similar issue (headphones work, speakers don't) by following this:
    sudo vi /etc/modprobe.d/alsa-base.conf (or some other editor instead of vi)
    Append the following at the end of the file:
    alias snd-card-0 snd-hda-intel
    options snd-hda-intel model=auto
    Reboot
    but it doesn't work for me. Before making any changes to alsa, this was the output alsamixer gave me:
    Things I did: I followed this HowTo, but now no hardware seems to be present (before, there were 2 items listed). Now alsamixer gives me this:
    alsamixer: relocation error: alsamixer: symbol snd_mixer_get_hctl, version ALSA_0.9 not defined in file libasound.so.2 with link time reference
    I guess there was an error in the alsa-driver install, so I began reinstalling it:
    cd alsa-driver*  //this works fine//
    sudo ./configure --with-cards=hda-intel --with-kernel=/usr/src/linux-headers-$(uname -r)  //this works fine//
    sudo make  //this doesn't work, see output error below//
    sudo make install
    Final lines of sudo make:
    hpetimer.c: In function ‘snd_hpet_open’:
    hpetimer.c:41: warning: implicit declaration of function ‘hpet_register’
    hpetimer.c:44: warning: implicit declaration of function ‘hpet_control’
    hpetimer.c:44: error: expected expression before ‘unsigned’
    hpetimer.c: In function ‘snd_hpet_close’:
    hpetimer.c:51: warning: implicit declaration of function ‘hpet_unregister’
    hpetimer.c:52: error: invalid use of undefined type ‘struct hpet_task’
    hpetimer.c: In function ‘hpetimer_init’:
    hpetimer.c:88: error: ‘EINVAL’ undeclared (first use in this function)
    hpetimer.c:99: error: invalid use of undefined type ‘struct hpet_task’
    hpetimer.c:100: error: invalid use of undefined type ‘struct hpet_task’
    hpetimer.c: At top level:
    hpetimer.c:121: warning: excess elements in struct initializer
    hpetimer.c:121: warning: (near initialization for ‘__param_frequency’)
    hpetimer.c:121: warning: excess elements in struct initializer
    hpetimer.c:121: warning: (near initialization for ‘__param_frequency’)
    hpetimer.c:121: warning: excess elements in struct initializer
    hpetimer.c:121: warning: (near initialization for ‘__param_frequency’)
    hpetimer.c:121: warning: excess elements in struct initializer
    hpetimer.c:121: warning: (near initialization for ‘__param_frequency’)
    hpetimer.c:121: error: extra brace group at end of initializer
    hpetimer.c:121: error: (near initialization for ‘__param_frequency’)
    hpetimer.c:121: warning: excess elements in struct initializer
    hpetimer.c:121: warning: (near initialization for ‘__param_frequency’)
    make[1]: *** [hpetimer.o] Error 1
    make[1]: Leaving directory `/usr/src/alsa/alsa-driver-1.0.9/acore'
    make: *** [compile] Error 1
    And then sudo make install gives me:
    rm -f /lib/modules/0.0.0/misc/snd*.*o /lib/modules/0.0.0/misc/persist.o /lib/modules/0.0.0/misc/isapnp.o
    make[1]: Entering directory `/usr/src/alsa/alsa-driver-1.0.9/acore'
    mkdir -p /lib/modules/0.0.0/misc
    cp snd-hpet.o snd-page-alloc.o snd-pcm.o snd-timer.o snd.o /lib/modules/0.0.0/misc
    cp: cannot stat `snd-hpet.o': No such file or directory
    cp: cannot stat `snd-page-alloc.o': No such file or directory
    cp: cannot stat `snd-pcm.o': No such file or directory
    cp: cannot stat `snd-timer.o': No such file or directory
    cp: cannot stat `snd.o': No such file or directory
    make[1]: *** [_modinst__] Error 1
    make[1]: Leaving directory `/usr/src/alsa/alsa-driver-1.0.9/acore'
    make: *** [install-modules] Error 1
    [SOLUTION] After screwing it all up, someone mentioned trying the packages in Synaptic instead - so I did. I reinstalled the following packages and rebooted:
    - alsa-hda-realtek-ignore-sku-dkms
    - alsa-modules-2.6.32-25-generic
    - alsa-source
    - alsa-utils
    - linux-backports-modules-alsa-lucid-generic
    - linux-backports-modules-alsa-lucid-generic-pae
    - linux-sound-base
    (I think I listed them all.) After rebooting, the audio worked, both through the speakers and the headphones. I have no idea which package made my audio work, but it was certainly one of them. [/SOLUTION]

    Read the article

  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all,
    We use the built-in log shipping in SQL Server to log-ship to our DR site, but once a month we do a DR test which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought no problem, I will script it, but I have run into trouble with it always complaining that the final log ship is too early to apply, even though I don't export the final log until putting the database into norecovery mode.
    Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back with no data loss. My scripts are very simplistic at the moment, but here they are.
    2 servers: primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (ta) and just checks that the jobs have finished, or quits if one never ends. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.

    From PARIS:
    /* Disable backup job */
    exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
    exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
    exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
    exec PARIST.master.dbo.DRStage2

    PARIST DRStage2:
    DECLARE @RetValue varchar(10)
    EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2, 2
    SELECT ReturnValue = @RetValue
    if @RetValue = 1
    begin
        print 'The Copy Task completed successfully'
    END
    ELSE
        print 'The Copy task failed. This may or may not be a problem; check the restore state of the database'
    SELECT @RetValue = 0
    EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2, 2
    SELECT ReturnValue = @RetValue
    if @RetValue = 1
    begin
        print 'The Restore Task completed successfully'
    END
    ELSE
        print 'The Restore task failed. This may or may not be a problem; check the restore state of the database'
    exec PARIS.master.dbo.DRStage3

    PARIS DRStage3:
    /* Do the last logship and move it to Trumpington */
    BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery
    EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
    EXEC PARIST.master.dbo.DRTransferFinish

    PARIST DRTransferFinish:
    restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery
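
    For context, the usual shape of a controlled log-shipping failover is: disable the backup/copy/restore jobs, take a tail-log backup WITH NORECOVERY on the primary, copy it across, then apply it WITH RECOVERY on the secondary. A minimal sketch for one database (share, path and job names here are placeholders, not the poster's actual environment):

    -- Hypothetical sketch of a scripted failover for a single database; all names are placeholders.
    -- On the primary: stop the log-shipping backup job, then take the tail of the log.
    EXEC msdb.dbo.sp_update_job @job_name = N'LSBackup_BOB2', @enabled = 0;
    BACKUP LOG [BOB2]
        TO DISK = N'\\drserver\drlogshipping\BOB2_tail.trn'
        WITH NORECOVERY, COMPRESSION;   -- leaves the primary database in a RESTORING state

    -- On the secondary: apply any outstanding log backups first, then the tail, and recover.
    RESTORE LOG [BOB2]
        FROM DISK = N'\\drserver\drlogshipping\BOB2_tail.trn'
        WITH RECOVERY;                  -- database becomes writable at the DR site

    If the regular backup and restore jobs are still running while the tail is taken and copied, it is easy to get backups applied out of sequence, which is one common way to end up with "too early to apply" errors.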

    Read the article

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step runs an SSIS package from the package store; the second to fifth are file system steps. We configured all steps to use Windows Authentication. Under Run As, we specified a user account which was created under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems with this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job fails with this error:
    Executed as user: domain\serviceaccount. ....00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 3:37:57 PM
    Error: 2010-03-09 15:37:57.95 Code: 0xC0016016 Source: Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. End Error
    Error: 2010-03-09 15:38:01.19 Code: 0xC0047062 Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1] Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo... The package execution fa... The step failed.
    Based on some research, I then went into MS Visual Studio, opened the project and changed the package's protection level from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.
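
    For reference, the credential/proxy pairing described above is normally wired up roughly like this (a sketch with made-up names, not the poster's actual configuration):

    -- Sketch only; credential, proxy and account names are hypothetical.
    USE master;
    CREATE CREDENTIAL SSISRunCredential
        WITH IDENTITY = N'domain\serviceaccount', SECRET = N'<service account password>';

    USE msdb;
    EXEC dbo.sp_add_proxy
        @proxy_name = N'SSISPackageProxy',
        @credential_name = N'SSISRunCredential',
        @enabled = 1;
    EXEC dbo.sp_grant_proxy_to_subsystem
        @proxy_name = N'SSISPackageProxy',
        @subsystem_id = 11;   -- 11 = SSIS package execution subsystem

    Even with the proxy in place, a package saved with EncryptSensitiveWithUserKey can only decrypt its stored passwords when run by the user who originally saved it, which is the usual cause of the 0x8009000B error. If switching to DontSaveSensitive still fails, the copy in the package store may not have been redeployed, or the now-unsaved Oracle password may need to be supplied at runtime through a package configuration instead.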

    Read the article

  • Hudson Mercurial checkout throws exception on Debian

    - by Jack
    I'm trying to configure Hudson to check out my site's sources from Mercurial, but it throws an exception. The /var/lib/hudson/jobs/jobname directory does exist, and I can create a workspace directory in there (even after su hudson), but as soon as I run the Hudson job again this directory disappears and the job ends with the same error:
    java.io.IOException: Cannot run program "hg" (in directory "/var/lib/hudson/jobs/jobname/workspace"): java.io.IOException: error=2, No such file or directory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
        at hudson.Proc$LocalProc.<init>(Proc.java:192)
        at hudson.Proc$LocalProc.<init>(Proc.java:164)
        at hudson.Launcher$LocalLauncher.launch(Launcher.java:639)
        at hudson.Launcher$ProcStarter.start(Launcher.java:274)
        at hudson.Launcher$ProcStarter.join(Launcher.java:281)
        at hudson.plugins.mercurial.MercurialSCM.joinWithPossibleTimeout(MercurialSCM.java:298)
        at hudson.plugins.mercurial.HgExe.popen(HgExe.java:191)
        at hudson.plugins.mercurial.HgExe.tip(HgExe.java:171)
        at hudson.plugins.mercurial.MercurialSCM.calcRevisionsFromBuild(MercurialSCM.java:254)
        at hudson.scm.SCM._calcRevisionsFromBuild(SCM.java:304)
        at hudson.model.AbstractProject.calcPollingBaseline(AbstractProject.java:1183)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:1172)
        at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:499)
        at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:415)
        at hudson.model.Run.run(Run.java:1362)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
        at hudson.model.ResourceController.execute(ResourceController.java:88)
        at hudson.model.Executor.run(Executor.java:145)
    Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
    Running on Debian 6.0.1. I wonder if anyone has run into this before, and hopefully solved it?

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with:
    - VMware vSphere 4.1 cluster
    - PS4000XV EqualLogic storage array (1.6 TB volume dedicated to backup-to-disk)
    - Physical backup server with a single LTO4 drive
    - Backup Exec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware
    - Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)
    What we would like to accomplish: I would like to implement an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first and, once completely backed up to the array, are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape.
    Where we are currently: of the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job has finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restoration I would first need to stage the data in the B2D folder before I could restore the VM.
    I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Thunderbird alerts when expected email does not arrive

    - by user871199
    I am on Ubuntu 12.04 using Thunderbird as my email client. Both are up to date. I have a bunch of nightly jobs that do the work and send a status mail. It gets tedious if you keep getting the same or similar mails every day, so I ended up writing a mail filter rule which causes the emails to end up in their respective folders automatically. If things are going ok, I really don't need to read the emails. Failure emails are sent to a different alias - if the job runs. We recently discovered that one of the jobs had not run for a few days because someone had accidentally disabled it. In order to avoid such problems in future, I would like to set up Thunderbird so that if I don't get an email from a given address within a given duration, it alerts me. My dream solution would let me set up a frequency - some jobs run every 4 hours. Is this possible? Can I set up Thunderbird (preferred) or another email client to remind me when an expected email does not show up?
    Based on the comments and answer I received, here are the reasons why I would like to use Thunderbird:
    - We are already using Thunderbird.
    - It has calendar support via a plugin, so I suppose something is already watching the time to remind us about events. Maybe this is just another type of event.
    - An additional job is one more failure point, and may complicate life if it has to monitor multiple hosts.
    - Additional tools - same thing, one more failure point.
    - Thunderbird runs on all the platforms we are using - Windows and Ubuntu - so it sort of becomes a platform-independent solution.

    Read the article

  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache is started through Task Scheduler

    - by Stefan Pantke
    I am currently writing an application which controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.
    Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected; the PHP-based controller script interacts with Excel as expected.
    Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two tasks using the Windows 7 Task Scheduler. One runs apache_start.bat, the other runs mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected; specifically, Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I receive an error. The error seems to come from Excel [not COM itself] and reads like this: Excel can't read the XLS file, or Excel failed to save the file due to an ill-named worksheet.
    Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders it immediately. The Windows system logs don't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except for the way Apache was started. The PHP controller script is certainly able to read the file system, since it obtains the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem: a single instance of the specific PHP controller interacts with Excel.
    Question: Could someone provide details on why this happens?

    Read the article

  • batch copy files with error log on missing permissions

    - by sc911
    Hi *,
    I'm searching for a tool to batch-copy files that should support the following points:
    - copy files from a net share
    - report any errors
    - show errors only, or filter the log on errors
    - don't stop on an error
    - also report if a file or a folder could not be copied due to missing permissions
    - if possible, it should have a queue where new jobs can be added while copying
    I tried the following tools:
    - TeraCopy: takes a lot of time just to calculate the time and the size of the job, and does not report errors due to missing permissions (it doesn't even add those files to the copy queue)
    - Karen's Replicator: does not report errors due to missing permissions
    - xcopy: does a great job when using the right parameters and piping the output to a file (in the German localization, xcopy /k /r /e /i /s /c /h SOURCE TARGET>LOGFILE 2>&1 will do the job; opening the logfile in IE gives you a great monitor), but queueing jobs is not possible (ok, you can join them all in a batch file, but you cannot queue jobs while another one is running (hm, thinking of a batch script that loops through a file with the source-target config...))
    to be continued
    Which tools do you use? Tell me!
    Thx
    sc911

    Read the article

  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution.
    A little background: at work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two, that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party, which I am several years from having.

    Read the article
