Search Results

Search found 19881 results on 796 pages for 'log analysis'.

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in Tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days - in particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward.

    In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition for the term, but in practical retail terms I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes".

    The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly, and you can do this almost in "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit to a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered, or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes: tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that Buyers, Merchandisers and Supply Chain managers follow. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance.

    I would love to hear your opinions on these trends and where you think Retail is heading to exploit these opportunities - please email me: [email protected]

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services introduced in SQL Server 2012. During these roadshows we always try to arrange some speeches at local community events in the evening - Copenhagen is already defined; we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands: SSAS Workshop in Amsterdam, NL – April 16-17, 2012. A 2-day seminar; Alberto and I will be the trainers for this event (register here). We're trying to arrange a community event but we still don't have a confirmation - stay tuned.

    Denmark: SSAS Workshop in Copenhagen, DK – April 26-27, 2012. A 2-day seminar; Alberto and I will be the trainers for this event (register here). Community event on April 26, 2012, in Hellerup at the Microsoft venue. All details available here: http://msbip.dk/events/26/msbip-mode-nr-5/. People from Sweden are welcome! Just register to this private group on LinkedIn to announce your presence, so we'll know how many people will attend.

    At the community events we'll deliver two speeches - here are the descriptions:

    Inside xVelocity (VertiPaq): PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional: You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both the short-term and long-term consequences of your choice.

    I hope to meet many people at these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight - stay tuned...

  • AWStats is processing log files but does not display them

    - by Wouter
    I've set up AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed I ran a manual build/update, which ran fine:

      sudo -u www-data ./awstats.pl -config=xxxx.com
      Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
      From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
      Phase 1 : First bypass old records, searching new record...
      Searching new records from beginning of log file...
      Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
      Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
      If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed.
      Jumped lines in file: 0
      Parsed lines in file: 814
      Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 814 new qualified records.

    It also produced the file in the DatDir, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl it tells me "Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)" and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.
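
    A couple of things worth checking (guesses based on common AWStats setups, not a confirmed diagnosis): the CGI falls back to a default config when the URL carries no config parameter, so it may not be reading the same config the manual update used. Something like:

      # force the same config the manual update used:
      #   http://xxxx.com/awstats/awstats.pl?config=xxxx.com
      # confirm the CGI and the updater agree on where the data lives
      grep -E '^(DirData|LogFile)' /etc/awstats/awstats.xxxx.com.conf
      # the stats file must be readable by the web server user
      sudo -u www-data head -n 1 /var/lib/awstats/awstats052010.xxxx.com.txt

    If DirData in the config the CGI reads differs from /var/lib/awstats, the page will report "Never updated" even though the update succeeded.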

  • GhettoVCB.sh log is wrong

    - by Michael
      2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
      2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
      2010-02-25 16:03:02 -- info: ============================== ghettoVCB LOG START ==============================
      2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
      2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
      2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
      2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/nfs_storage_backup/vm1
      2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
      2010-02-25 16:03:02 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 2
      2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
      2010-02-25 16:03:02 -- info: CONFIG - DISK_BACKUP_FORMAT = thin
      2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
      2010-02-25 16:03:02 -- info: CONFIG - ADAPTER_FORMAT = buslogic
      2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
      2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
      2010-02-25 16:03:02 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
      2010-02-25 16:03:02 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
      2010-02-25 16:03:02 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
      2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
      2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
      2010-02-25 16:03:02 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
      2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
      2010-02-25 16:03:02 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
      2010-02-25 16:03:02 -- info: CONFIG - LOG_LEVEL = info
      2010-02-25 16:03:02 -- info: CONFIG - BACKUP_LOG_OUTPUT = /tmp/ghettoVCB.log
      2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
      2010-02-25 16:03:02 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
      2010-02-25 16:03:02 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
      2010-02-25 16:03:13 -- info: Initiate backup for VM1
      2010-02-25 16:03:13 -- info: Initiate backup for VM1
      2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
      2010-02-25 16:03:13 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-02-25" for VM1
      Failed to clone disk : The file already exists (39).
      Destination disk format: VMFS thin-provisioned
      Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...
      2010-02-25 16:04:16 -- info: Removing snapshot from VM1 ...
      Destination disk format: VMFS thin-provisioned
      Cloning disk '/vmfs/volumes/datastore1/machine/VM1.vmdk'...

    How can I fix this issue? The backup is working, but the log shows what look like two backups running at exactly the same time.

  • Linux installation analysis

    - by blunders
    "Ending company IT Admin relationship" has a good checklist for taking over an existing IT system, but I'm wondering, as it relates to Linux: What is the most effective way to assess the scope of existing custom configurations, installs, scripts, etc. that were done? Is there any software that will check whether the kernel, system files, etc. mirror the default files for the version installed? At this point I don't know what distro of Linux the server is running (though using Netcraft I do know the server appears to be Linux) - so it's possible that, without knowing that information, this would be a hard question to answer.

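    A hedged starting point (assuming the box turns out to be RPM- or dpkg-based; adjust to whatever you actually find):

      # identify the distro first
      cat /etc/*release                # or: lsb_release -a, if installed
      # on RPM-based systems, verify installed files against package metadata
      rpm -Va                          # flags files whose size/checksum/permissions changed
      # on Debian/Ubuntu, debsums (may need installing) does a similar check
      debsums -c                       # lists files whose checksums no longer match

    Anything these tools flag, plus whatever lives outside package management (/usr/local, /opt, crontabs), is a reasonable first cut at "custom".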

  • What causes this logrotate behavior in Puppet?

    - by ujjain
    After running logrotate, Puppet starts writing its logs into /var/log/puppet/masterhttp.log-20130616. How come it doesn't keep logging to /var/log/puppet/masterhttp.log? Normal behavior is to rename the original log file and start writing into a fresh, empty log file, keeping the other file as a log archive.

      [root@puppetmaster puppet]# ls -al
      total 97520
      drwxr-x---.  2 puppet puppet     4096 Jun 16 03:24 .
      drwxr-xr-x. 12 root   root       4096 Jul  1 09:11 ..
      -rw-r--r--.  1 puppet puppet        0 Jun 16 03:24 masterhttp.log
      -rw-rw----.  1 puppet puppet 99847187 Jul  1 09:19 masterhttp.log-20130616
      [root@puppetmaster init.d]# cat /etc/logrotate.d/puppet
      /var/log/puppet/*log {
          missingok
          notifempty
          create 0644 puppet puppet
          sharedscripts
          postrotate
            pkill -USR2 -u puppet -f /usr/sbin/puppetmasterd || true
            [ -e /etc/init.d/puppet ] && /etc/init.d/puppet reload > /dev/null 2>&1 || true
          endscript
      }
      [root@puppetmaster init.d]#

    How can I make Puppet log to /var/log/puppet/masterhttp.log and not to /var/log/puppet/masterhttp.log-20130616? Even restarting Puppet doesn't make it log into /var/log/puppet/masterhttp.log instead of /var/log/puppet/masterhttp.log-20130616.
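
    This pattern (a zero-byte new file while the renamed file keeps growing) usually means the daemon still holds the old file descriptor open, and the reload signal in postrotate isn't reaching the process that actually writes the log - a guess, since it depends on how the master is run. If fixing the signal proves fiddly, logrotate's copytruncate directive sidesteps the problem entirely; a minimal sketch:

      /var/log/puppet/*log {
          missingok
          notifempty
          copytruncate    # copy the log aside, then truncate the original in place,
                          # so the daemon's open file descriptor keeps pointing at
                          # the live file; no reload signal needed
      }

    The trade-off is a small window during the copy in which log lines can be lost.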

  • MySql Data Loss - post mortem analysis - RackSpace Cloud Server

    - by marfarma
    After a recent 'emergency migration' of a RS cloud server, the mysql databases on our server snapshot image proved to be days out of date from the backup date. And yet files that were uploaded through the impacted webapp had been written to the file system. Related metadata that was written to the database was lost, but the files themselves were backed up. Once I was able to manually access the mysql data files before the mysql server started (the server was configured to start mysql on boot), I was able to see that the update time for ib_logfile1, ib_logfile0 and ibdata1 was days old. As with this poster (mysql data loss after server crash), it's as if some caching controller had told the OS / mysql server that it had committed data that was still in cache, and it was lost instead of flushed. I can't quite wrap my head around how the uploaded files got written but the database data did not. I would have thought that any cache would have been flushed system-wide, rather than process by process. Any suggestions as to how this might have happened?
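
    One thing worth ruling out in the post-mortem (a sketch, not a diagnosis; it wouldn't explain days of loss on its own, but it eliminates one layer):

      -- how durable were commits supposed to be?
      SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
      -- 1 = flush and sync the log at every commit (the durable default);
      -- 0 or 2 rely on periodic flushes, so a crash can lose recent commits
      SHOW VARIABLES LIKE 'sync_binlog';

    If these are at their durable settings, suspicion shifts back down the stack to the hypervisor or storage layer acknowledging writes it hadn't actually persisted.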

  • FAT filesystem analysis tool

    - by Andy
    I have a dump of a FAT file system. Is there a Windows tool I can use to analyse it, including:

    - Provide basic information (sector size etc.)
    - Validate the file system, basic corruption checking
    - Allow the files and directory structure to be viewed and possibly edited (i.e. mounting as a Windows partition)

    Thanks, Andy
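
    One possible toolset (an assumption that it fits your needs; it covers the first and third points read-only, not editing): The Sleuth Kit ships Windows binaries and understands FAT images directly:

      fsstat -f fat image.dd       # file system details: sector size, cluster size, FAT layout
      fls -f fat -r image.dd       # recursively list files and directories, including deleted entries
      icat -f fat image.dd 5       # dump a file's contents by its metadata address (5 is just an example)

    For mounting the dump as a drive letter, a loopback image mounter such as OSFMount is the usual companion.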

  • Windows File System Analysis

    - by bouvierr
    I am looking for a FREE tool to perform analyses on the NTFS file system of my Windows 7 PC. I want to easily see how the amount of data is distributed throughout the entire file system. The following applications seem very good, but they are not free and probably overkill for my requirements:

    - FolderSizes 5
    - MailMeter Windows File System Reporting Tool

    I am aware that some applications (like Folder Size 2.5) can add a column in Windows Explorer to show the size of each folder, but I am looking for something more like a reporting tool. Thank you for your suggestions.

  • Forensic Analysis of the OOM-Killer

    - by Oddthinking
    Ubuntu's Out-Of-Memory Killer wreaked havoc on my server, quietly assassinating my applications, sendmail, apache and others. I've managed to learn what the OOM Killer is, and about its "badness" rules. While my machine is small, my applications are even smaller, and typically only half of my physical memory is in use, let alone swap-space, so I was surprised. I am trying to work out the culprit, but I don't know how to read the OOM-Killer logs. Can anyone please point me to a tutorial on how to read the data in the logs (what are ve, free and gen?), or help me parse these logs?

      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 1, exc 2326 0 goal 2326 0...
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 1, exc 2326 0 red 61795 745
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 2, exc 122 0 goal 383 0...
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 2, exc 383 0 red 61795 745
      Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 2
      Apr 20 20:03:27 EL135 kernel: OOM killed process watchdog (pid=14490, ve=13516) exited, free=43104 gen=24501.
      Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=4457, ve=13516) exited, free=43104 gen=24502.
      Apr 20 20:03:27 EL135 kernel: OOM killed process ntpd (pid=10816, ve=13516) exited, free=43104 gen=24503.
      Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=27401, ve=13516) exited, free=43104 gen=24504.
      Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29009, ve=13516) exited, free=43104 gen=24505.
      Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506.
      Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=24983, ve=13516) exited, free=53117 gen=24507.
      Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=29129, ve=13516) exited, free=68493 gen=24508.
      Apr 20 20:03:27 EL135 kernel: OOM killed process sendmail-mta (pid=941, ve=13516) exited, free=68803 gen=24509.
      Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=12418, ve=13516) exited, free=69330 gen=24510.
      Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=22953, ve=13516) exited, free=72275 gen=24511.
      Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=6624, ve=13516) exited, free=76398 gen=24512.
      Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=23317, ve=13516) exited, free=94285 gen=24513.
      Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29030, ve=13516) exited, free=95339 gen=24514.
      Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=20583, ve=13516) exited, free=101663 gen=24515.
      Apr 20 20:03:28 EL135 kernel: OOM killed process logger (pid=12894, ve=13516) exited, free=101694 gen=24516.
      Apr 20 20:03:28 EL135 kernel: OOM killed process bash (pid=21119, ve=13516) exited, free=101849 gen=24517.
      Apr 20 20:03:28 EL135 kernel: OOM killed process atd (pid=991, ve=13516) exited, free=101880 gen=24518.
      Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=14649, ve=13516) exited, free=102748 gen=24519.
      Apr 20 20:03:28 EL135 kernel: OOM killed process grep (pid=21375, ve=13516) exited, free=132167 gen=24520.
      Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 4, exc 4215 0 goal 4826 0...
      Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 1
      Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 4, exc 4826 0 red 189481 331
      Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 2
      Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 5, exc 3564 0 goal 3564 0...
      Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 1
      Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 5, exc 3564 0 red 189481 331
      Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 2
      Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 6, exc 8071 0 goal 8071 0...
      Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 1
      Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 6, exc 8071 0 red 189481 331
      Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 2

    Watchdog is a watchdog task that was idle; nothing in the logs suggests it had done anything for days. Its job is to restart one of the applications if it dies, so it's a bit ironic that it was the first to get killed.

    Tail was monitoring a few log files. Unlikely to be consuming memory madly.

    The apache web-server only serves pages to a little old lady who only uses it to get to church on Sundays, plus a couple of developers who were in bed asleep and hadn't visited a page on the site for a few weeks. The only traffic it might have had is from the port-scanners; all the content is password-protected and not linked from anywhere, so no spiders are interested.

    Python is running two separate custom applications. Nothing in the logs suggests they weren't humming along as normal. One of them is a relatively recent implementation, which makes it suspect #1. It doesn't have any data structures of any significance, and normally uses only about 8% of the total physical RAM. It hasn't misbehaved since.

    The grep is suspect #2, and the one I want to be guilty, because it was a once-off command. The command (which piped the output of a grep -r to another grep) had been started at least 30 minutes earlier, and the fact it was still running is suspicious. However, I wouldn't have thought grep would ever use a significant amount of memory. It took a while for the OOM killer to get to it, which suggests it wasn't going mad, but the OOM killer stopped once it was killed, suggesting it may have been a memory-hog that finally satisfied the OOM killer's blood-lust.

  • Is it possible to re-cab an Administrative Install Point?

    - by Nathaniel Bannister
    We have Acrobat 8 Pro at work, and our media was painfully out of date. Rather than install all of the machines at 8.0.0 and then do the 6 or 7 consecutive reboots adobe expects you to be ok with, I decided I'd integrate the .msp files into the installer. After reading up on it, I figured out the exact patch order that adobe required, extracted my cd to an Administrative install point, and ran the patches against it:

      msiexec /a AcroPro.msi /p AcrobatUpd810_efgj_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd811_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd812_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd813_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd816_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd817_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd820_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd822_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd823_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd825_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"
      msiexec /a AcroPro.msi /p AcrobatUpd826_all_incr.msp TARGETDIR="C:\Acrobat8" /log "output.log"

    Now I have an AIP that is fully patched to 8.2.6 (tested working prior to attempting to CAB it), but it is absolutely huge (1.2gb). What I would like to do is take the folders within the AIP and put them back into a cab file for the sake of convenience in transferring the files around. I tried the command:

      cscript "C:\Program Files\Microsoft SDKs\Windows\v7.0\Samples\sysmgmt\msi\scripts\WiMakCab.vbs" AcroPro.msi Data1 /L /C /S

    Per the guide I was using, this did produce the cab file I wanted; however, the resulting MSI fails to install with an error 2602. It's been a while since I've done something like this, and it's probably a glaring oversight on my part, but any insight would be much appreciated.

  • Confusion about TCP packet analysis terms

    - by Berkay
    I'm analyzing our network and have some confusion about the terms. Below is what I understand about the 2-packet output from source to destination; from these packets I have to extract some features, as described. Please help me get these straight:

    - Packets with at least a byte of TCP data payload: it seems this is tcp.len > 0.
    - The minimum segment size (my confusion is whether headers are included or not).
    - The average segment size observed during the lifetime of the connection; the definition is: calculated as the value reported in the actual data bytes divided by the actual data pkts reported.
    - Total bytes in IP packets: should be the ip_len value.
    - Total bytes in (Ethernet): the total number of bytes sent, probably related to frame.len and frame.cap_len. These two terms are described as: frame.cap_len - frame length stored into the capture file; frame.len - frame length on the wire. Please also make these two terms clear to me.
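
    For poking at these fields directly, a quick sketch with tshark (assuming a capture file; the field names are the standard Wireshark ones mentioned above, and older tshark versions use -R where newer ones use -Y):

      # one row per packet: payload bytes, on-the-wire length, captured length
      tshark -r capture.pcap -T fields -e tcp.len -e frame.len -e frame.cap_len -Y tcp
      # count packets carrying at least one byte of TCP payload
      tshark -r capture.pcap -Y 'tcp.len > 0' | wc -l

    Comparing frame.len and frame.cap_len per packet also answers the last point: they differ only when the capture was made with a snaplen that truncated frames.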

  • Exchange 2010: Find Move Request Log after move request completes

    - by gravyface
    EDIT: significantly changed my question here to streamline it a bit. I've gone ahead and used 100 as my corrupted item count and ran it from the Exchange Shell.

    So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have moved mailbox store from OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. What I'd like to do now is review the previous move request log files: when they were in progress, I could right-click > Properties > Log > View Log File, but now that they're completed, that's not available. Nor can I use:

      Get-MoveRequestStatistics <user> -includereport | fl MoveReport

    ...as the move request has now completed and it errors out with "couldn't find a move request that corresponds...". Basically what I'd like to do is present the list of bad items to the user so that they're aware of what items didn't come across, and if anything important was lost, they can check their current OST, an archive .pst, etc. to recover it if possible. If this all needs to be wrapped up in a batch Exchange PowerShell command that pipes the output to log files on disk somewhere, I'm all ears, and would appreciate it for the next migration we do.
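
    For the next run, a sketch of the kind of batch described above (hedged: it assumes the completed move requests haven't been removed yet - once a request is removed, its statistics and report go with it; the C:\MoveLogs path and naming are just placeholders):

      # export the report for every completed move before cleaning the requests up
      Get-MoveRequest -MoveStatus Completed | ForEach-Object {
          $stats = Get-MoveRequestStatistics $_.Identity -IncludeReport
          # BadItemsEncountered is the headline count; the Report holds the detail
          $stats | Select-Object DisplayName, BadItemsEncountered |
              Out-File "C:\MoveLogs\$($_.Alias)-summary.txt"
          $stats.Report | Out-File "C:\MoveLogs\$($_.Alias)-report.txt"
      }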

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk i/o speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1G tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) killed the larger machine when creating the files. The larger machine took 10x as long to create the tablespaces than did the other machines. Now, I am left wondering how that could be. What programs or scripts would you guys recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.
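
    In the meantime, a low-tech way to get hard numbers on all three boxes (typeperf is built in on Server 2003; the counter names are the standard PerfMon ones):

      typeperf "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 5

    As a common rule of thumb, sustained Avg. Disk sec/Read or sec/Write well above ~0.02s, or a queue length persistently above the number of spindles, points at the disk/controller path rather than the database itself.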

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find the access log. I noticed this when running /usr/local/psa/admin/sbin/statistics - the host in question is being scanned:

      Main HTML page is 'awstats.<hostname_masked>-http.html'.
      Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
      From data in log file "-"...
      Phase 1 : First bypass old records, searching new record...
      Searching new records from beginning of log file...
      Jumped lines in file: 0
      Parsed lines in file: 0
      Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 0 new qualified records.

    So basically no access logs have been parsed/found. I then went on to check whether I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs, but all I find is error_log. Does anybody know what is wrong here and perhaps how I could fix this?

    Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which however contains only some rewrite conditions plus a directory statement that contains php_admin_flag and php_admin_value settings. None of them are related to logging, though.

  • Dir all files to output.log, including message "File Not Found"

    - by user316687
    I'm trying to dir a bunch of files with the DOS command dir. My dir.bat file:

      dir E:\documentos\57\Asiento\01\"Asiento 3 Modificacion de Estatuto.doc"
      dir E:\documentos\134\Asiento\01\"File Does Not Exist.doc"
      dir E:\documentos\55\Asiento\01\"Asiento 5 Padron de Afiliados Segunda Entrega.doc"

    The second one doesn't exist. Then, when running my bat:

      C:\myuser>E:\dir.bat > output.log

    I open output.log and don't find any message about the file that was not found. output.log:

      E:\documentos>dir E:\documentos\57\Asiento\01\"Asiento 3 Modificacion de Estatuto.doc"
       Volume in drive E is New Volume
       Volume Serial Number is 0027-F7F6
       Directory of E:\documentos\57\Asiento\01
      20/12/2005 06:41 p.m. 40,960 Asiento 3 Modificacion de Estatuto.doc
      1 File(s) 40,960 bytes
      0 Dir(s) 17,053,155,328 bytes free

      E:\documentos>dir E:\documentos\134\Asiento\01\"File Does Not Exist.doc"
       Volume in drive E is New Volume
       Volume Serial Number is 0027-F7F6
       Directory of E:\documentos\134\Asiento\01

      E:\documentos>dir E:\documentos\55\Asiento\01\"Asiento 5 Padron de Afiliados Segunda Entrega.doc"
       Volume in drive E is New Volume
       Volume Serial Number is 0027-F7F6
       Directory of E:\documentos\55\Asiento\01
      08/08/2007 08:33 a.m. 40,960 Asiento 5 Padron de Afiliados Segunda Entrega.doc
      1 File(s) 40,960 bytes
      0 Dir(s) 17,053,151,232 bytes free

    Is there any way to make output.log show the "File Not Found" message?
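
    The likely explanation: dir prints "File Not Found" to the standard error stream, and > only redirects standard output. Redirecting stderr into the same file should capture it (standard cmd.exe syntax):

      C:\myuser>E:\dir.bat > output.log 2>&1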

  • Empty rewrite.log on Windows, RewriteLogLevel is in httpd.conf

    - by ripper234
    I am using mod_rewrite on Apache 2.2, Windows 7, and it is working ... except I don't see any logging information. I added these lines to the end of my httpd.conf:

      RewriteLog "c:\wamp\logs\rewrite.log"
      RewriteLogLevel 9

    The log file is created when Apache starts (so it's not a permission problem), but it remains empty. I thought there might be a conflicting RewriteLogLevel statement somewhere, but I checked and there isn't. What else could cause this? Could this be caused by Apache not flushing the log file? (I closed it by hitting CTRL-C on the httpd.exe command ... this caused the access logs to be flushed to disk, but still nothing in rewrite.log.)

    My (partial) httpd-vhosts.conf:

      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          ServerName my.domain.com
          DocumentRoot c:\wamp\www\folder
          <Directory c:\wamp\www\folder>
              Options -Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
              <IfModule mod_rewrite.c>
                  RewriteEngine On
                  RewriteBase /
                  RewriteRule . everything-redirects-to-this.php [L]
              </IfModule>
          </Directory>
      </VirtualHost>
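
    One thing worth trying (a guess, not a verified fix): in Apache 2.2 the RewriteLog directive is valid in both server-config and virtual-host context, and rewrites that run inside a <VirtualHost> don't necessarily log to a RewriteLog defined only at the top level of httpd.conf. Duplicating the directives inside the vhost would look like:

      <VirtualHost *:80>
          ServerName my.domain.com
          RewriteLog "c:/wamp/logs/rewrite.log"
          RewriteLogLevel 9
          ...
      </VirtualHost>

    The forward slashes in the path are deliberate; Apache on Windows accepts them and they avoid backslash-escaping surprises.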

  • Walk me through the Linux log files (please)

    - by Andy
    Hey all, I just tried loading a 2MB file in gedit and it silently died on me. I was wondering if anything might appear in a log file that might help me diagnose this: I checked syslog and found out it segfaulted. While doing this I realised that I don't really know anything about how logging is organised on *nix machines. All I know at the mo is:

    - Logs are typically stored in /var/log/... is there anywhere else that I should know about?
    - I'm familiar with application specific logs, such as apache's.
    - I understand that dmesg is the bootup log, and syslog is a general system log... is that right?

    So would someone mind taking me through the most useful logs? Are the two logs I mention in the final point the only general logs? And what are the funky numbers at the start of lines in dmesg? Seconds since startup? Please include anything in your answers that you think would improve my understanding here and help me track down anomalies! TIA Andy

  • Processing a log to fix a malformed IP address ?.?.?.x

    - by skymook
    I would like to replace the first character 'x' with the number '7' on every line of a log file using a shell script. Example of the log file:

      216.129.119.x [01/Mar/2010:00:25:20 +0100] "GET /etc/....
      74.131.77.x [01/Mar/2010:00:25:37 +0100] "GET /etc/....
      222.168.17.x [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    My humble beginnings...

      #!/bin/bash
      echo Starting script...
      cd /Users/me/logs/
      gzip -d /Users/me/logs/access.log.gz
      echo Files unzipped...
      echo I'm totally lost here to process the log file and save it back to hd...
      exit 0

    Why is the log file IP malformed like this? My web provider (1and1) has decided not to store IP addresses, so they have replaced the last number with the character 'x'. They told me it was a new requirement by 'law'. I personally think that is bs, but that would take us off topic. I want to process these log files with AWstats, so I need an IP address that is not malformed. I want to replace the x with a 7, like so:

      216.129.119.7 [01/Mar/2010:00:25:20 +0100] "GET /etc/....
      74.131.77.7 [01/Mar/2010:00:25:37 +0100] "GET /etc/....
      222.168.17.7 [01/Mar/2010:00:27:10 +0100] "GET /etc/....

    Not perfect, I know, but at least I can process the files, and I can still gain a lot of useful information like country, number of visitors, etc. The log files are 200MB each, so I thought that a shell script is the way to go because I can run it rapidly on my Macbook Pro locally. Unfortunately, I know very little about shell scripting, and my javascript skills are not going to cut it this time. I appreciate your help.
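
    A minimal sketch of the missing middle of the script - sed is a good fit here, since it streams the file without loading it into memory (file names taken from the script above):

      #!/bin/bash
      cd /Users/me/logs/
      gzip -d access.log.gz
      # anchor at the start of the line: three dot-separated number groups,
      # then ".x" - rewrite the x to 7 and keep the rest of the line intact
      sed 's/^\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\)\.x/\1.7/' access.log > access.fixed.log

    BSD sed on OS X accepts this POSIX basic-regex syntax, so it should run as-is on the MacBook.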

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue, popping / pushing the first elements of every list, and I can understand why it takes O(N log K) time.

    But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists, and it takes O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored in the first steps). This generally means we have to make the following "tree" of merge operations, where N1, N2, N3... stand for the LIST1, LIST2, LIST3 sizes:

      O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
      O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
      O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is - did I f%ck up somewhere or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
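
    For concreteness, a minimal sketch of the pairwise scheme (Python as executable pseudocode; the function names are mine, not from any library):

      def merge_two(a, b):
          """Merge two sorted lists in O(len(a) + len(b))."""
          out, i, j = [], 0, 0
          while i < len(a) and j < len(b):
              if a[i] <= b[j]:
                  out.append(a[i]); i += 1
              else:
                  out.append(b[j]); j += 1
          out.extend(a[i:]); out.extend(b[j:])
          return out

      def merge_k(lists):
          """ceil(log2 K) rounds; every round touches each of the N elements once."""
          if not lists:
              return []
          while len(lists) > 1:
              merged = [merge_two(lists[i], lists[i + 1])
                        for i in range(0, len(lists) - 1, 2)]
              if len(lists) % 2:           # odd list out is carried into the next round
                  merged.append(lists[-1])
              lists = merged
          return lists[0]

    This is O(N log K), as the question argues; the O(K N) figure instead describes the unbalanced strategy of merging the lists one at a time into a single accumulator, where early elements get re-scanned in up to K-1 merges.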

  • apache error log for rewrite rule

    - by Imran Naqvi
    Hi, I am getting the following error in the Apache log:

      File does not exist: D:/wamp/www/script/products, referer: http://localhost/script/products/category/product-123.html

    It appears whenever the URL localhost/script/products/category/product-123.html is parsed through this rewrite rule:

      RewriteRule ^products/([~A-Za-z0-9-"]+)/([~A-Za-z0-9-".]+).html$ index.php?page_type=products&prod=$2 [L]

    The script and the rule are working fine, but I am getting that error in the Apache error log. I have activated RewriteLog, but nothing is showing up in the rewrite.log file. It's empty. Please help, and thanks in advance.

  • Log Location Url Responses of 301 redirects from IIS

    - by James Lawruk
    Is there a way to log 301 redirects returned by IIS with the (1) request Url and the (2) location Url of the response? Something like this:

      Url, Location
      /about-us, /about
      /old-page, /new-page

    The IIS logs contain the Request Url and the status code (301), but not the location Url of the response. Ideally there would be an additional field in the IIS Log called Location that would be populated when IIS responded with a 301. In my case the source of the redirect could be ISAPI Rewrite Rules, ASP.NET applications, Cold Fusion applications, or IIS itself. Perhaps there is a way to log IIS response data? Thanks for your help.

  • Turning a log file into a sort of circular buffer

    - by pachanga
    Folks, is there a *nix solution which would make a log file act as a circular buffer? For example, I'd like log files to store a maximum of 1GB of data and discard the older entries once the limit is reached. Is it possible at all? I believe that in order to achieve this, the log file would have to be turned into some sort of special device... P.S. I'm aware of the miscellaneous log-rotating tools, but this is not what I need. Log rotation requires lots of IO and usually happens once a day, while I need a "runtime" solution.
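
    One runtime approach worth a look (assuming the writing program can log to stdout and daemontools is acceptable): multilog caps the current file at a byte size and keeps a fixed number of rotated files, discarding the oldest automatically, with no daily cron involved:

      # keep roughly 1GB total: 64 files of up to 16MB each, oldest deleted first
      someprog 2>&1 | multilog s16777215 n64 /var/log/someprog

    It's not a true in-place circular buffer, but it bounds disk usage as described, and each rotation is just a rename rather than a copy.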

  • Could not retrieve backup settings for primary ID in Log shipping

    - by user1723139
    I am doing log shipping between two Amazon EC2 instances running Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition. Both instances are in the same domain and I can access the shared folders between the instances. The SQL Server service account and agent service account are running under a domain account. When I activate log shipping (with standby-mode restore on the secondary server), the initial backup gets restored on the secondary. After that, the backup operation fails and I get the following error messages:

      Error: Could not retrieve backup settings for primary ID 'xxxxxx-xxxx-xxxx-xxxx-4d772cd7337e'. (Microsoft.SqlServer.Management.LogShipping)
      Error: Failed to connect to server IP-0A7653F2. (Microsoft.SqlServer.ConnectionInfo)
      Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (.Net SqlClient Data Provider)
      ----- END OF TRANSACTION LOG BACKUP -----

    Any ideas?
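
    A first diagnostic step (just a sketch; IP-0A7653F2 is the server name taken from the error above): from the secondary, confirm that SQL Server on the primary is reachable by that exact name, since the jobs are failing on precisely that connection:

      sqlcmd -S IP-0A7653F2 -E -Q "SELECT @@SERVERNAME"

    If that fails, the usual suspects are name resolution between the instances, the TCP/IP or Named Pipes protocols being disabled on the primary, or port 1433 being blocked in the EC2 security group / Windows firewall.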

  • Log of cron actions on OS X

    - by Doug Harris
    Does the cron which comes with OS X log its actions anywhere? I'm not looking for the output of any particular cron job, but rather a log of what cron is doing. On a couple of linux machines I've checked, there's /var/log/cron, which has contents like:

      Apr 26 11:00:01 localhost crond[27755]: (root) CMD (/root/bin/mysql-backup)
      Apr 26 11:01:01 localhost crond[27892]: (root) CMD (run-parts /etc/cron.hourly)
      Apr 26 11:07:01 localhost crond[28138]: (root) CMD (/usr/local/bin/python /home/user1/scripts/pythonscript.py)
      Apr 26 11:18:18 localhost crontab[28921]: (user2) LIST (user2)
      Apr 26 11:18:22 localhost crontab[28929]: (user2) BEGIN EDIT (user2)
      Apr 26 11:18:59 localhost crontab[28929]: (user2) REPLACE (user2)

    This shows when jobs ran, when users viewed or edited crontabs, etc. This stuff is nowhere that I've found on my Snow Leopard machine.
