Search Results

Search found 19256 results on 771 pages for 'boost log'.


  • Entering Safe Mode problem

    - by NikolaysGS
    Hi! When I start my PC I get the following message: "The Logon User Interface DLL RtlGina2.dll failed to load." I found a solution here, but I cannot log into my PC in either safe or normal mode. Interestingly, the first time I did manage to log into safe mode, but I then used msconfig to change boot.ini to safe mode with networking, where I cannot log in. Now, no matter what I choose from the F8 menu, I always end up in safe mode with networking (where I get the error above). So, is there any way to log back into "pure" safe mode? I am not sure the question is clear enough ;-(
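
    One likely explanation: once msconfig writes a /safeboot switch into boot.ini, that switch overrides whatever is chosen from the F8 menu, which would account for always landing in safe mode with networking. A minimal sketch of the relevant boot.ini line (the ARC path is illustrative and will differ per machine):

        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /safeboot:minimal

    /safeboot:minimal boots "pure" safe mode, /safeboot:network boots safe mode with networking, and removing the switch entirely restores normal booting. If you cannot log in at all, the file can be edited from the Recovery Console.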

    Read the article

  • VIDEO: Improved user experience of PeopleTools 8.50 a hit with customer

    - by PeopleTools Strategy Team
    New and upgraded features in PeopleTools 8.50 really help boost productivity, says Oracle customer Dennis Mesler of Boise, Inc. From improved navigational flows to enhanced grids to new features such as type-ahead (auto-suggest), users can expect to save time and training with PeopleTools 8.50. To hear more of this customer's opinion on the user experience of PeopleTools 8.50, watch his video HERE.

    Read the article

  • How to setup heartbeat for IP fail over on SSH failure

    - by Tony
    I wonder if anyone can help me. I am trying to set up heartbeat on a Red Hat 5 server to fail over an IP address when SSH stops responding, so that you SSH to a VIP and get put through to whichever server currently holds the floating IP.

                     192.168.0.100
                           |
                           |
        /------------------------\     |     /------------------------\
        |       Server 01        |     |     |       Server 02        |
        | eth0 - 192.168.0.1     |-----/     | eth0 - 192.168.0.2     |
        | eth0:0 - 192.168.0.100 |           | eth0:0 - down          |
        \------------------------/           \------------------------/

    If SSH stops responding, I want eth0:0 to be brought up on the second machine so that SSH connections carry on being served. I have tried to follow some documents I found online, so here is my current configuration:

    ha.cf:

        bcast eth0
        keepalive 2
        warntime 10
        deadtime 30
        initdead 120
        udpport 694
        auto_failback off
        node vm-bal01
        node vm-bal02
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log

    authkeys:

        auth 1
        1 sha1 sshhhsecret1234

    haresources:

        server01 192.168.0.100/24/eth0:0/192.168.0.255

    Hope someone can help, as this is driving me nuts...
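
    Note that heartbeat on its own only fails over when the peer node looks dead; it does not monitor individual services, so an sshd failure with the host still up will not move the VIP. A minimal sketch of a service-level watchdog (a hypothetical script, not part of heartbeat) that releases the VIP when the local sshd stops answering:

        #!/bin/bash
        # Hypothetical watchdog: poll the local sshd and, if it stops
        # answering, stop heartbeat so the peer takes over the VIP.
        while true; do
            if ! nc -z -w 5 127.0.0.1 22; then
                logger "sshd unresponsive - stopping heartbeat to release the VIP"
                service heartbeat stop
                exit 1
            fi
            sleep 10
        done

    In practice, tools such as mon are often paired with heartbeat for exactly this kind of service check.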

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database in Full Recovery Model and start taking transaction log backups. I take a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. Other than that case, though, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?
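
    To make the point-in-time case concrete, a sketch of the restore sequence it enables (T-SQL; database and file names are illustrative):

        -- restore the last full backup without recovering
        RESTORE DATABASE MyDb FROM DISK = N'\\backupserver\sql\MyDb_full.bak' WITH NORECOVERY;
        -- roll forward through the log chain up to yesterday 2pm
        RESTORE LOG MyDb FROM DISK = N'\\backupserver\sql\MyDb_log_1300.trn' WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = N'\\backupserver\sql\MyDb_log_1400.trn'
            WITH STOPAT = N'2012-06-01T14:00:00', RECOVERY;

    Any missing log backup breaks that chain, which is the main argument for keeping the previous day's log backups somewhere, tape or otherwise, at least until the next verified full backup.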

    Read the article

  • logrotate by size outside the daily schedule

    - by Josh Smeaton
    We have a couple of applications that generate huge log files. It's not enough to rotate those logs daily, so I created the following logrotate conf:

        /var/log/ourapp/*log {
            compress
            copytruncate
            missingok
            size 200M
            rotate 10
        }

    The idea is that we can keep 2GB of logs for this one application, no matter how quickly those files are filling up. The problem, though, is that logrotate only runs once daily. AFAIK, when logrotate kicks off at 4am, it will check to see that the size is at least 200M and rotate it if so. Ideally logrotate would run every minute, check the size, and rotate if the size is greater. Is there a standard way for rotating based on size outside of the daily cron schedule?
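
    The usual answer is to run logrotate against this config from cron at whatever frequency you need; logrotate itself is cheap, and with the size directive it only rotates once a file exceeds 200M. A sketch, assuming the rules above live in their own file (paths are illustrative):

        # /etc/cron.d/ourapp-logrotate - check every 5 minutes
        */5 * * * * root /usr/sbin/logrotate /etc/logrotate.d/ourapp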

    Read the article

  • Guaranteeing ACID properties for InnoDB databases

    - by plinehan
    What steps must one take to ensure that an otherwise default-configured InnoDB server is truly ACID compliant? The InnoDB configuration page mentions that the hardware itself must be configured to honor fsync calls, i.e. any write-back caches must be disabled. This page mentions some other concerns, but may be conflating the binary log with the InnoDB log, and may be a bit out of date regarding default settings for MySQL 5.x. From reading the binary log documentation page, it would seem that the "sync_binlog=1" setting is not required for ACID properties in general, only for ACID properties vis-à-vis point-in-time recovery and replication. So, is disabling write-back disk caching sufficient, or are there other settings that must be tweaked?
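
    For reference, a minimal sketch of the MySQL-side settings usually audited for this (my.cnf syntax; treat the values as assumptions to check against your 5.x version's defaults):

        [mysqld]
        # flush and fsync the InnoDB log at every commit
        innodb_flush_log_at_trx_commit = 1
        # only needed if the binary log must also survive a crash
        # (point-in-time recovery, replication)
        sync_binlog = 1

    The fsync-honoring part lives outside MySQL: disable the drive's write-back cache (e.g. hdparm -W 0 /dev/sda on Linux) or use a battery-backed controller cache.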

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view. While writing the question I got one idea: keep a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from it. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem, and I would appreciate any input on what best practice is.
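
    The event-log idea can be sketched as a pending-events table (hypothetical names; the same shape works in a key-value store). Rows exist only while something is outstanding, so the common case of "nothing to show" is one cheap indexed lookup, and the table grows with pending work rather than with users times features:

        CREATE TABLE user_pending_event (
            user_id  BIGINT NOT NULL,
            event_id INT    NOT NULL,  -- e.g. 1 = new-feature tutorial
            PRIMARY KEY (user_id, event_id)
        );
        -- on page view: what does this user still need to see?
        SELECT event_id FROM user_pending_event WHERE user_id = 21312315;
        -- on completion (or "don't show again"): remove the row
        DELETE FROM user_pending_event WHERE user_id = 21312315 AND event_id = 1;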

    Read the article

  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            var targetSize = sphere.radius * 2;
            var newScale = new Vector3
            {
                x = targetSize * currentScale.x / currentSize.x,
                y = targetSize * currentScale.y / currentSize.y,
                z = targetSize * currentScale.z / currentSize.z
            };
            Debug.Log(string.Format(
                "{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
                objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x));
            // DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1),
            //   newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
            // RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2),
            //   newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017
            objectToScale.transform.localScale = newScale;
        }

    And the supporting extension method:

        public static Bounds GetMeshHierarchyBounds(this GameObject go)
        {
            var bounds = new Bounds(); // not used as-is, but a struct needs to be instantiated
            if (go.renderer != null)
            {
                bounds = go.renderer.bounds; // make sure the parent is included
                Debug.Log("Found parent bounds: " + bounds);
                //bounds.Encapsulate(go.renderer.bounds);
            }
            foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
            {
                Debug.Log(string.Format("Found {0} bounds are {1}", c.name, c.bounds));
                if (bounds.size == Vector3.zero)
                {
                    bounds = c.bounds;
                }
                else
                {
                    bounds.Encapsulate(c.bounds);
                }
            }
            return bounds;
        }

    After the re-scale there doesn't seem to be any consistency to the results: some objects with completely uniform scales (x, y, z) seem to resize correctly, but others don't :| It's one of those things I've been trying to fix for so long that I've lost all grasp on the logic. Any help would be appreciated!

    Read the article

  • ALSA samples capture: cannot open device

    - by Randagio
    I'm quite new to Linux (Lubuntu 12.04, to be precise) and to ALSA programming in general. I'm trying to write a C program to capture audio from the internal PC microphone in order to process it. As a first step I googled a bit and found an article on capturing audio samples, "A tutorial on using the ALSA Audio API", but when I compile it and execute it with:

        ./capture "default"

    or

        ./capture "hw:0,0"

    and all the possible variants on the theme, it always raises the error: cannot open device hw:0,0 (no such file or directory). So the issue is: what is the name of the mic audio device to pass as a parameter to record audio from the mic? The mic is working fine, because the Sound Recorder program records sounds perfectly and I can play them back. The output of aplay -l is the following:

        **** List of PLAYBACK Hardware Devices ****
        card 0: I82801DBICH4 [Intel 82801DB-ICH4], device 0: Intel ICH [Intel 82801DB-ICH4]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: I82801DBICH4 [Intel 82801DB-ICH4], device 4: Intel ICH - IEC958 [Intel 82801DB-ICH4 - IEC958]
          Subdevices: 1/1
          Subdevice #0: subdevice #0

    and this is the amixer output (cut):

        Simple mixer control 'Master',0
          Capabilities: pvolume pswitch penum
          Playback channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Mono:
          Front Left: Playback 31 [100%] [0.00dB] [on]
          Front Right: Playback 31 [100%] [0.00dB] [on]
        Simple mixer control 'Master Mono',0
          Capabilities: pvolume pvolume-joined pswitch pswitch-joined penum
          Playback channels: Mono
          Limits: Playback 0 - 31
          Mono: Playback 4 [13%] [-40.50dB] [on]
        Simple mixer control 'PCM',0
          Capabilities: pvolume pswitch penum
          Playback channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Mono:
          Front Left: Playback 31 [100%] [12.00dB] [on]
          Front Right: Playback 31 [100%] [12.00dB] [on]
        Simple mixer control 'CD',0
          Capabilities: pvolume pswitch cswitch cswitch-exclusive penum
          Capture exclusive group: 0
          Playback channels: Front Left - Front Right
          Capture channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Front Left: Playback 0 [0%] [-34.50dB] [off] Capture [off]
          Front Right: Playback 0 [0%] [-34.50dB] [off] Capture [off]
        Simple mixer control 'Mic',0
          Capabilities: pvolume pvolume-joined pswitch pswitch-joined cswitch cswitch-exclusive penum
          Capture exclusive group: 0
          Playback channels: Mono
          Capture channels: Front Left - Front Right
          Limits: Playback 0 - 31
          Mono: Playback 22 [71%] [-1.50dB] [on]
          Front Left: Capture [on]
          Front Right: Capture [on]
        Simple mixer control 'Mic Boost (+20dB)',0
          Capabilities: pswitch pswitch-joined penum
          Playback channels: Mono
          Mono: Playback [off]
        Simple mixer control 'Mic Select',0
          Capabilities: enum
          Items: 'Mic1' 'Mic2'
          Item0: 'Mic1'
        Simple mixer control 'Stereo Mic',0
          Capabilities: pswitch pswitch-joined penum
          Playback channels: Mono
          Mono: Playback [off]

    So according to aplay it seems I have no recording device, but according to amixer I've got the mic, a mic boost and a stereo mic as well, with all those gorgeous controls in place!! If so, how could Sound Recorder record the audio without any problem at all?!?! For sure I'm giving the wrong device name on the command line for capturing audio, but I'm losing hope of finding the correct one! Please help... before I tear my hair out!!!
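
    Worth noting: aplay -l lists playback hardware only; capture devices are listed by arecord, so the device name to pass should come from there. A sketch (the exact card/device numbers are assumptions until arecord confirms them):

        # list capture hardware
        arecord -l
        # sanity-check recording outside your program: 5 seconds, CD quality
        arecord -D plughw:0,0 -f cd -d 5 test.wav
        # then try the same name with the capture program
        ./capture "plughw:0,0"

    plughw: is often friendlier than hw: because ALSA inserts rate and format conversion instead of failing when the hardware does not natively support the requested parameters.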

    Read the article

  • Syslog permissions

    - by Niels Kristian
    I'm using the $InputFile facility in rsyslog to monitor various log files scattered around my Ubuntu 12.04 server: nginx, unicorn, rails, postgres, cron, etc. My problem is that some of these log files are created with -rw-r----- permissions, so rsyslog doesn't have read rights. I installed most of the programs using apt-get and didn't change anything from the defaults. In other words, I would rather not modify every single log file / daemon to have the right permissions if I could instead give syslog read access to all of them at once. So the question is: can I do that, and is it the "right thing to do"?
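
    A sketch of the usual approach on Ubuntu, assuming the files in question are group-readable by adm (many daemon logs there are root:adm with mode 640): add the syslog user to that group instead of loosening each file's permissions:

        sudo usermod -a -G adm syslog
        sudo service rsyslog restart

    Files created with some other group still need either a matching group membership or an ACL, e.g. setfacl -m u:syslog:r /var/log/someapp.log.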

    Read the article

  • DokuWiki Segmentation Fault On Radius Auth

    - by mrduclaw
    I'm running x64 Ubuntu 12.04. I did a simple apt-get install dokuwiki to install DokuWiki, and I'm trying to follow the directions located here: http://www.dokuwiki.org/auth:radius to get Radius authentication working. Things seemed to install OK. Under Configuration Manager I selected "Radius" as the authentication backend and filled in the Radius details at the bottom. Now, however, whenever I try to log into the wiki, my browser gives me the following error: "No data received". I checked /var/log/apache2/error.log and see this:

        [Tue Jul 10 22:22:14 2012] [notice] child pid 5270 exit signal Segmentation fault (11)

    I'm fairly sure the Radius server is set up correctly, as it authenticates fine with my squid proxy and other things on the network, but this is about the extent of my Linux troubleshooting skills. Can anyone suggest steps to help track down what's causing apache2 to segfault, short of attaching with gdb and issuing a set follow-fork-mode? I'm also open to just hearing suggestions for similar...
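
    One relatively painless way to get a backtrace without fork-following (a sketch, assuming the stock Ubuntu apache2 layout; adjust paths if yours differ): run Apache in single-process debug mode under gdb, so the segfaulting request is served by the process you are attached to:

        sudo service apache2 stop
        # apache2 on Ubuntu needs its environment file sourced first
        sudo bash -c '. /etc/apache2/envvars && gdb --args /usr/sbin/apache2 -X'
        (gdb) run
        # reproduce the wiki login in the browser, then:
        (gdb) bt

    If the backtrace points into the PHP radius extension, reinstalling or rebuilding that module (package name varies; check apt-cache search radius php) would be the next thing to try.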

    Read the article

  • Make Money by Building Findable Websites

    Do you want to put up an online business? Then you need a website to promote it. A professional-quality website will enable you to reach more potential customers, no matter how small or big your business is. If you own a business, building findable websites for it will boost your online presence. Moreover, you also get the chance to generate more profit.

    Read the article

  • Check what process was causing the problem of high CPU load

    - by linuxk
    I'm running an nginx WordPress server in KVM using 12.04 server x86. It ran very well for about 4 months, until 2 hours ago. I found that my website was down, with no ping response. Virt-manager logged a high CPU load before the unexpected shutdown. I want to know what process caused the unexpected shutdown. The following log files make me think my server was attacked. Any suggestions and help would be appreciated. kern.log and syslog showed me the same output:

        Nov 11 03:54:11 www kernel: [1344541.156239] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0.0.0.0 DST=224.0.0.1 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
        Nov 11 03:54:11 www kernel: [1344541.156315] [UFW BLOCK] IN=eth0 OUT= MAC= SRC=0101:080a:2334:c900:0100:0000:0000:0000 DST=ff02:0000:0000:0000:0000:0000:0000:0001 LEN=72 TC=0 HOPLIMIT=1 FLOWLBL=0 PROTO=ICMPv6 TYPE=130 CODE=0

    /nginx/access.log showed me:

        119.235.237.17 - - [11/Nov/2012:03:45:29 +0900] "GET /blog HTTP/1.1" 200 30493 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
        my-server-ip - - [11/Nov/2012:11:05:30 +0900] "POST /wp-cron.php?doing_wp_cron=13 HTTP/1.0" 499 0 "-" "WordPress/3.4.2; http://mywebsite.com"

    The server turned back on here:

        119.235.237.16 - - [11/Nov/2012:11:05:30 +0900] "GET /blog HTTP/1.1" 200 32935 "-" "Yeti/1.0 (NHN Corp.; http://help.naver.com/robots/)"
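
    For a post-mortem on the guest itself, a couple of standard places to look (a sketch; paths assume Ubuntu 12.04 and that the sysstat package was enabled before the incident):

        # did the kernel kill anything for memory pressure before the crash?
        grep -i -E 'oom|out of memory|killed process' /var/log/kern.log
        # CPU history leading up to the shutdown (sa11 = the 11th)
        sar -u -f /var/log/sysstat/sa11
        # run queue and load average over the same window
        sar -q -f /var/log/sysstat/sa11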

    Read the article

  • Speed up loading of test results from builds in Visual Studio

    - by Jakob Ehn
    I still see people complaining about the long time it takes to load test results from a TFS build in Visual Studio. And they have a valid point: it does take a very long time to load the test results, even for a small number of tests. The reason is that the test results include not just the result of the test run but also all the binaries that were part of the test run. This often also means that the debug symbols (*.pdb) will be downloaded to your local machine. The reason for this behaviour is that it lets you re-run the tests locally. However, most of the time this is not what the developer will do; they just want to know which tests failed and why, so they can fix the tests and rerun them locally. It turns out there is a way to load only the test results, which is much faster. The only tricky bit is to find the location of the .trx file that is generated during the build, particularly in TFS 2010 where you often have multiple build agents, which of course results in different paths to the trx file. Note: to use this you must have read permission to the build folder on the build agent where the build was executed.

    1. Open the build result for the build.
    2. Click View Log.
    3. Locate the part where MSTest is invoked. When using test containers, it looks like this: [screenshot omitted]. Note: you can actually search in the log window; press Ctrl+F and you will get a little search box at the bottom. Nice!
    4. On the MSTest command line call, locate the /resultsfileroot parameter, which points to the folder where the test results are stored.
    5. Note that this path is local to the build server, so you need to replace the drive letter with the server name: D:\Builds\Project\TestResults becomes \\<BuildServer>\Project\TestResults.
    6. Double-click on the .trx file and you will notice that it loads much faster compared to opening it from the build log window.

    Read the article

  • Raw Materials - Og, Sumerian DBA, Part 2

    A disruptive innovation raises an old, old question.

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are three types of data structures that can be used for the collision detection broadphase:

    Unsorted arrays: check every object against every object - O(n^2) time; O(log n) space. It's so slow, it's useless if n isn't really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++)
                narrowPhase(i, j);
        };

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2d and O(n^1.67) for 3d) and O(n) space. This assumes the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check the bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps and vice versa?

    Read the article

  • 4 Best Website Building Tips to Learn

    If you want to boost your exposure in the business industry, you will need an effective marketing tool, and one great tool for this purpose is a website. Many businesses today have benefited greatly from having a website. So you think it is hard to build a website? Well, actually, you need more patience to learn the best website building techniques.

    Read the article

  • Bash script runs fine, but not in cron

    - by radiotech
    I have a script that's supposed to record a shoutcast stream for an hour, convert it to mp3, and then save it. The script runs correctly when I run it from the terminal, but I can't seem to get it to run in cron (where it should run every hour, at the top of the hour). Here's the line in crontab:

        0 * * * * /medialib/tech/bin/recordstream 2>&1 >> /medialib/tech/cron.log

    and here's the script:

        #!/bin/bash
        name="$(date +%s)"
        mp3_name=$name.mp3
        wav_name=$name.wav
        timeout -sHUP 60m vlc -I dummy --sout "#transcode{channels=2}:std{access=file,mux=wav,dst=/medialib/stream_backup/wav/$wav_name}" /medialib/tech/lib/listen.m3u
        lame --mp3input /medialib/stream_backup/wav/$wav_name /medialib/stream_backup/$mp3_name
        rm /medialib/stream_backup/wav/$wav_name

    Thank you!

    EDIT: Contents of cron.log (this text has been in the log file since it was transferred from an old server where it was working):

        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
        VLC media player 2.0.8 Twoflower
        Command Line Interface initialized. Type `help' for help.
        > Shutting down.
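
    When a script works interactively but fails under cron, the usual suspects are environment differences: cron runs with a minimal PATH (often just /usr/bin:/bin), so commands like vlc, timeout, and lame may not be found, and the 2>&1 >> ordering sends stderr to cron's mail rather than to the log file. A sketch of the crontab with both addressed (the PATH value is an assumption; check with which vlc):

        PATH=/usr/local/bin:/usr/bin:/bin
        0 * * * * /medialib/tech/bin/recordstream >> /medialib/tech/cron.log 2>&1

    Note also that VLC refuses to run as root, which matters if this entry lives in root's crontab.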

    Read the article

  • How do I disable nginx sending messages to syslog?

    - by altman
    My nginx sends lots of messages to syslog, but I don't need them. In my nginx.conf:

        error_log /var/log/nginx-error.log notice;
        ......
        server {
            access_log off;
            location / {
                ....
            }
        }

    but in my /var/log/messages I see:

        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32172530 kevent() reported about an closed connection (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://www.igoido012.com//vk HTTP/1.1", upstream: "http:////vk", host: "www.igoido012.com", referrer: "http://www.baidu.com/"
        Nov 22 23:25:09 cache3 nginx: 2011/11/22 23:25:09 [error] 3437#0: *32099531 upstream timed out (60: Operation timed out) while reading response header from upstream, client: , server: , request: "GET http://t.web2.qq.com/channel/poll?msg_id=0&clientid=431509&t=1321975433305 HTTP/1.1", upstream: "http://:80/channel/poll?msg_id=0&clientid=431509&t=1321975433305", host: "t.web2.qq.com", referrer: "http://t.web2.qq.com/proxy.html?v=20110331001"

    How can I prevent nginx from sending messages to my syslog?
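
    Whether nginx writes to syslog at all depends on how the binary was built (stock nginx of that era only wrote to files; some distribution builds patch in syslog support), so treat this as a sketch: raising the error_log threshold suppresses [error]-level messages like the upstream timeouts above, wherever they are routed:

        # log only crit and worse
        error_log /var/log/nginx-error.log crit;

    The trade-off is that genuine upstream errors disappear from the file log as well.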

    Read the article
