Search Results

Search found 31501 results on 1261 pages for 'event log'.

Page 438/1261

  • What statistics app should I use for my website?

    - by Camran
    I have my own server (with root access) and I need statistics on the users who visit my website. I have looked at an app called Webalizer; is this a good choice? I run apache2 on an Ubuntu 9 system. If you know of any good statistics apps for servers, please let me know. A follow-up question: all the statistics are saved in log files, right? So how large would these log files become? Being able to split them would be good; I don't know if that is possible with Webalizer, though.
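    If log size becomes a concern: Apache's own logs can be rotated independently of whatever statistics package reads them. A minimal logrotate sketch, assuming the stock Ubuntu /var/log/apache2 layout (Ubuntu normally ships a similar file already):

        # /etc/logrotate.d/apache2 -- sketch; adjust paths to your install
        /var/log/apache2/*.log {
                # rotate weekly, keep a year of history, gzip old files
                weekly
                rotate 52
                compress
                # keep the newest rotated file uncompressed so the stats run can still read it
                delaycompress
                missingok
                notifempty
                sharedscripts
                postrotate
                        /etc/init.d/apache2 reload > /dev/null 2>&1 || true
                endscript
        }

    Webalizer can be run in incremental mode, so its totals survive the rotation.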

    Read the article

  • snort-mysql not starting on Ubuntu server

    - by Rsaesha
    I am following this tutorial: https://help.ubuntu.com/community/SnortIDS I've set up the database, everything has installed correctly, and I've configured snort.conf so it outputs to a database (with the credentials all filled out). When I run /etc/init.d/snort start, it fails but does not produce any error message other than [fail]. The last few lines of /var/log/syslog are:
        snort[5687]: database: must enter database name in configuration file#012
        snort[5687]: FATAL ERROR:
    My output database line in snort.conf is:
        output database: log, mysql, user=snort password=... dbname=snort host=localhost
    I have tried it with commas separating everything, with quotes around the values, etc. The password is made up only of letters (after I thought maybe a number was throwing it off).
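    One way to get more than a bare [fail] out of the init script is to run Snort's own config test by hand; a sketch, with paths assumed from the Ubuntu packaging:

        # parse the same config the init script uses and print the exact error
        # (including complaints about the output database line) to the console
        sudo snort -T -c /etc/snort/snort.conf

        # make sure there is exactly one output database line and no stray duplicate
        grep -n "^output database" /etc/snort/snort.conf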

    Read the article

  • caches domain user on local PC

    - by user630320
    We have a fully working domain in the UK, and around the world we have users who use a VPN (Check Point) to connect to our domain. One of the users in the USA has a laptop he has never logged on to the domain from before, so his login details have never been cached on it. Does anyone know how to cache the user's login information on this laptop? I have tried netdom trust to add this user to the laptop, but I was not able to do it. At the moment the user logs in with a local administrator account and then uses the VPN to log on to our domain, but when it comes to accessing files on the domain he gets access denied. When he tries to log in with the domain account he gets "There are currently no logon servers available to service the logon request". Does anyone know how to add this user?
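    For what it's worth, Windows only caches domain credentials after at least one successful interactive logon while a domain controller is reachable, and the number of logons it will cache is a registry setting; a sketch for checking it (standard Winlogon key, default is 10):

        :: check how many domain logons the laptop is allowed to cache
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount

        :: raise it if a policy has set it to 0 (run from an elevated prompt)
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v CachedLogonsCount /t REG_SZ /d 10 /f

    If the VPN client can establish the tunnel before the Windows logon (Check Point calls this Secure Domain Logon), one domain logon over that tunnel should be enough to populate the cache.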

    Read the article

  • Preprocess outgoing email bodies with a linux smtp server/proxy?

    - by jdc0589
    I have an SMTP server running locally on my server, and I need to edit the contents of email bodies before they actually get sent out. I have tried using EmailRelay to proxy my SMTP server, with the --filter option specifying a filter/editing executable, but am getting some odd behavior. Currently I specify a shell script as the filter program, and all it is supposed to do is append some text to a log file and return 0, so I know it actually got called. The weird thing is that the email gets sent but nothing shows up in my log file like it should (though it does when I run the script manually). If I remove the 'exit 0' statement, the email does not get sent, as I would expect. Are there any other options/suggestions?
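    A common gotcha with EmailRelay's --filter is that the script runs as the daemon's user and working directory, so a relative log path (or one the daemon cannot write to) silently goes nowhere; a minimal filter sketch with absolute paths (file names are examples, and the "content file as first argument" convention is taken from the EmailRelay documentation):

        #!/bin/sh
        # /usr/local/bin/er-filter.sh -- EmailRelay passes the path of the queued
        # content file as the first argument
        MSG="$1"
        LOG=/var/log/er-filter.log

        # the log file must be writable by the user emailrelay runs as, e.g.
        #   touch /var/log/er-filter.log && chown <daemon-user> /var/log/er-filter.log
        echo "$(date): filtering $MSG" >> "$LOG"

        # edit the body in place here, for example append a footer:
        # printf '\nsent via example relay\n' >> "$MSG"

        # exit 0 tells EmailRelay to accept and send the (possibly modified) message;
        # a non-zero exit rejects it, which is why dropping 'exit 0' stops delivery
        exit 0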

    Read the article

  • Open Directory authenticated bind succeeds, but creates incomplete record

    - by Jay Thompson
    I have about a dozen Macs running 10.6.7 or 10.6.8, which are all failing to bind properly to my new 10.7.4 Server OD. I can bind them just fine via Directory Utility or dsconfigldap, and it reports success. However, when I look at the record, it is failing to write the MAC address. Even if I manually update the record with the MAC address, MCX doesn't do anything and clients can't log in to OD accounts. All of the affected clients have hundreds of lines in the /Library/Logs/DirectoryService.error.log like so: 2012-09-15 22:23:18 EDT - T[0x00007FFF70292CC0] - GetMACAddress returned 0x *** bad control string *** 8x I do know that all of these clients were previously managed with the Guest computer account, and I also know that they were all imaged with a DeployStudio image when they were purchased. I've tried dscacheutil -flushcache, but after that I'm drawing a blank. Google has a few hits, but nothing very helpful. Re-imaging would be ideal but probably isn't going to happen. Anyone come across this before?

    Read the article

  • Sending a large number of mails causing problems on CentOS 6 / Plesk 10

    - by papakost
    I have a VPS running CentOS 6. When the system tries to send the daily newsletter, after some time (e.g. after sending about 2000 emails) I get the error "Unable to send mail" and the system memory usage goes really high. Up to that point the mails are delivered normally. The other symptoms are: I cannot see anything in /var/log/maillog (the file does not seem to be written to); all files in /var/spool/mail have a size of 0 bytes; from time to time the httpd log shows errors like "/usr/sbin/sendmail: error while loading shared libraries: libc.so.6: cannot open shared object file: Error 23"; and the "Activate mail service on domain" setting in Plesk is deactivated. Any idea what's going wrong here?
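    For reference, errno 23 on Linux is ENFILE ("too many open files in system"), which would explain mail working for the first couple of thousand messages and then failing; a hedged diagnostic sketch:

        # allocated file handles versus the system-wide limit
        cat /proc/sys/fs/file-nr
        cat /proc/sys/fs/file-max

        # which processes hold the most descriptors while the newsletter is running
        lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head

        # temporary bump of the system-wide limit (persist it in /etc/sysctl.conf)
        sysctl -w fs.file-max=200000

    If the counters climb steadily during the mailing, the sending script is probably leaking descriptors rather than the limit simply being too low.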

    Read the article

  • Is there a suitable chain for iptables when eth is in Promisc mode?

    - by user1495181
    I have a front-end machine with 2 Ethernet cards. I want to use netfilter queue to do some checks on the packets. I set the interfaces up like this:
        ifconfig eth0 0.0.0.0 promisc up
        ifconfig eth1 0.0.0.0 promisc up
    I want to have an iptables rule like this (only an example):
        iptables -A INPUT -i eth0 -j LOG --log-prefix " eth0 packet "
    but the packets do not pass through iptables, because they are not addressed to this machine's MAC. Promiscuous mode didn't help. I saw that there is a way to add an iptables chain for PROMISC, but it needs compilation... Is there any simpler way to have an iptables rule apply when the packet is not addressed to this interface? Currently I bypass this by creating a bridge between the 2 interfaces and putting the rule on FORWARD, but I don't want to create a bridge.
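    Frames that are not addressed to the host's MAC never traverse INPUT, so if the bridge turns out to be unavoidable after all, a minimal version of that workaround looks roughly like this (interface names from the question; the kernel bridge module must be loaded for the bridge-nf sysctl to exist):

        # put both NICs into a bridge so foreign frames hit the FORWARD chain
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 eth1
        ip link set br0 up

        # make bridged frames visible to iptables
        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # match on the physical ingress port and hand packets to userspace
        iptables -A FORWARD -m physdev --physdev-in eth0 -j NFQUEUE --queue-num 0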

    Read the article

  • Packets marked INVALID in FORWARD rule

    - by Raphink
    I have a firewall that has 3 IP aliases on 1 physical interface. Packets get dropped between these 3 interfaces (whether ICMP, HTTP, or anything else). We tracked it down to the packets being marked INVALID in the FORWARD chain and dropped due to this rule:
        chain FORWARD {
            policy DROP;
            # connection tracking
            mod state state INVALID LOG log-prefix 'INVALID FORWARD DROP: ';
            mod state state INVALID DROP;
            mod state state (ESTABLISHED RELATED) ACCEPT;
        }
    (That is, we see the INVALID FORWARD DROP logs in dmesg.) What could be causing this?
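    One way to find out why conntrack classifies these packets as INVALID (out-of-window TCP segments and asymmetric paths between the aliased addresses are common reasons) is to ask the kernel to log its reasoning; a hedged sketch:

        # log the reason conntrack marks packets INVALID (255 = all protocols)
        sysctl -w net.netfilter.nf_conntrack_log_invalid=255

        # if the drops turn out to be out-of-window segments, relaxing window
        # tracking often lets them match ESTABLISHED again
        sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1

        # the explanations show up in the kernel log next to the existing
        # 'INVALID FORWARD DROP' entries
        dmesg | grep -i conntrack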

    Read the article

  • What is the standard technique for shifting the frames of a sprite according to user input?

    - by virtual__
    From my own experience I have developed two techniques for changing the sprites of a character that is reacting to user input, in the context of a classic 2D platformer. The first one is to store all the character's pixmaps in a list and keep the index of the currently used pixmap in an ordinary variable. This way, every time the player presses a key -- say the right arrow to move the character forward -- the graphics engine looks up the next pixmap to draw, draws it, and increments the index counter. That's a pretty common approach, I believe; the problem is that in this case the animation's quality depends not only on the number of sprites available but also on how often your engine listens to user input. The second technique is to actually play an animation on every key press event. For this you can use any animation framework you want: it's only necessary to set the timer and the animation steps and to call the animation's play() method in your key press event handler. The problem with that approach is that it lacks responsiveness, since the character won't react to any input while the current animation is still being played. What I want to know is whether you use one of these techniques -- or something similar -- in your games, or whether there's a standard method for animating sprites out there that's widely known by everybody but me.

    Read the article

  • How does the "Last Result" column of Scheduled Tasks in Windows Server 2003 get set from a process or script?

    - by leeand00
    The Last Result column of the Scheduled Tasks Window on Windows Server 2003, displays the result of the execution of the .exe, .vbs, .ps1, .bat, .cmd, etc... that has been run at the scheduled time. There is also an archived history of this value that appears in the Scheduled Tasks Log (Found on the Scheduled Tasks Window under the Advanced->View Log) Now my question is, if I'm running a scheduled task that is a .exe, .vbs, .ps1, .bat, .cmd, etc... how do I use that process to return a specific Last Result when the process ends? P.S. If you think this question should be split up into smaller parts since I'm painting pretty broadly with it, just let me know and I'll split it into subsequent smaller questions
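    The Last Result column simply records the exit code of the process the task launched (0x0 meaning success), so the script only has to end with an explicit exit; a sketch for a .cmd wrapper (some-command.exe is a placeholder):

        @echo off
        rem mytask.cmd -- whatever the task actually does goes here
        some-command.exe
        if errorlevel 1 (
            rem propagate a non-zero code; it shows up as 0x1 in Last Result
            exit 1
        )
        rem success shows up as 0x0
        exit 0

    The same idea applies to the other file types: WScript.Quit 2 in a .vbs surfaces as 0x2, and exit 3 in a .ps1 should do the same when the task runs powershell.exe with -File.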

    Read the article

  • How to scrub a document of all text between brackets with find and replace

    - by sam
    I have a log file and I need to find all instances of <password> hash here </password> and remove the hash, replacing it with some dummy text like aaa-aaa-aaa-aaaa. The recurring search argument is anything between a tag that starts with <password> and ends with </password>. All the hashes being replaced will be different. What's the easiest way to go about this? The log is on a Windows machine. MS Word would probably be easiest for me, unless it's achievable with WordPad, Notepad, or some other lightweight editor like TextPad. Thanks
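    Since the log sits on a Windows machine, a PowerShell one-off may be the lightest option; a sketch, assuming each hash sits between the tags (possibly across line breaks) and using a hypothetical file path:

        # read the whole file as one string so the pattern can span line breaks
        $text = [IO.File]::ReadAllText('C:\logs\app.log')

        # (?s) lets . match newlines; .*? is non-greedy so each password block
        # is replaced separately rather than everything between the first and last tag
        $clean = $text -replace '(?s)<password>.*?</password>', '<password>aaa-aaa-aaa-aaaa</password>'

        [IO.File]::WriteAllText('C:\logs\app.log', $clean)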

    Read the article

  • rewrite map (prg:) never finishes

    - by SooDesuNe
    I'm using Apache and a prg: type rewrite map. My map looks like:
        #!/usr/bin/perl
        $| = 1; # Turn off buffering
        while (<STDIN>) {
            print "someothersite.com";
        }
    The rewrite rule declared in httpd.conf is:
        RewriteMap app_map prg:/file/path/test.pl
        RewriteRule (\/[\w]+)(\/[^\#\s]+)?$ http://${app_map:$1}$2 [P,L]
    And the log files show:
        init rewrite engine with requested uri /a/testlink.html
        applying pattern '(\/[\w]+)(\/[^\#\s]+)?$' to uri '/a/testlink.html'
    It appears that test.pl never gives control back to Apache. When the map lookup succeeds I expect to see this output in the log file:
        map lookup OK: map=app_map key=/a -> val=someothersite.com
    Why is my map not returning control back to Apache?
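    One likely culprit, going by the documented contract for prg: rewrite maps, is that the map program must write each answer as a single newline-terminated line; without the trailing newline mod_rewrite keeps waiting for the rest of the response and the lookup never completes. A sketch of the adjusted script (the lookup value is still hard-coded, as in the question):

        #!/usr/bin/perl
        $| = 1;                          # keep autoflush on
        while (<STDIN>) {
            chomp;                       # strip the newline from the lookup key
            print "someothersite.com\n"; # the newline tells mod_rewrite the answer is complete
        }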

    Read the article

  • ssh, "Last Login", `last` and OS X

    - by allentown
    I have hit the googles as much as I can on this, being specific to OS X, and I am not finding an answer. Nothing is wrong, but curiosity levels are high.
        $ ssh [email protected]
        Password:
        Last login: Wed Apr 7 21:28:03 2010 from my-laptop.local
        ^lonely tylenol^
    Line 1 is my command, line 2 is the shell asking for the password, line 3 is where my question comes from, and line 4 comes out of /etc/motd. I can find nothing in ~/ or any of the .bash* files that contains the string "Last login", and I would like to alter it. It performs some type of hostname lookup which I can not determine. If I ssh to another host:
        $ ssh [email protected]
        Last login: Wed Apr 7 21:14:51 2010 from 123-234-321-123-some.cal.isp.net.example
        hi there, you are on box 456
    Line 1 is my command, line 2 is again where my question comes from, and line 3 is from /etc/motd. *The dash'd IP address is not reversed. On this remote host I have ~/.ssh and its corresponding keys set up, so there was no password request. Where is the "Last login:" coming from, where does the date stamp come from, and most importantly, where does the hostname come from? While on [email protected] (box 456):
        $ hostname
        remote.location.example456.com
    Or with dig, to make sure I have rDNS/PTR set up (for which I am not authoritative, but my ISP has correctly set it):
        $ dig -x 123.234.321.123 PTR
        remote.location.example456.com
    or
        $ dig PTR 123.321.234.123.in-addr.arpa. +short
        remote.location.example456.com.
    My previous hostname used to be 123-234-321-123-some.cal.isp.net.example, which I set with hostname -s remote.location.example456.com because it was obnoxious to see such a long name. That explains the value returned by hostname, which is now remote.location.example456.com. Mac OS X (10.6 in this case) does seem to honor:
        touch ~/.hushlogin
    If I leave that file empty, I get nothing on the shell when I log in. I want to know what controls the host resolution of the IP, and how it all works. For example, running last reports a huge list of my logins, which have obtusely long hostnames, when it would be preferable for them to just be remote.location.example456.com. More confusing to me, reading the man pages for wtmp and lastlog, it looks like lastlog is not used on OS X; /var/log/lastlog does not exist. Actually, none of these exist on 10.5 or 10.6: /var/run/utmp (the utmp file), /var/log/wtmp (the wtmp file), /var/log/lastlog (the lastlog file). If I am to assume that the system is doing some kind of reverse lookup, I certainly do not know what kind, as it is not an accurate one.
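    For the SSH case specifically, the "Last login" banner and the hostname it shows come from sshd and the system's login records rather than from the shell startup files; a hedged sketch of the knobs involved, assuming a stock OpenSSH layout on the server:

        # on the server: sshd prints the "Last login:" line unless told not to,
        # and UseDNS controls whether the client IP is reverse-resolved to a name
        grep -iE 'PrintLastLog|UseDNS' /etc/ssh/sshd_config
        #   PrintLastLog no   -> suppress the banner for everyone
        #   UseDNS no         -> record/show the bare IP instead of the PTR name

        # per-account: an empty ~/.hushlogin silences the banner and the motd
        touch ~/.hushlogin

    As far as I can tell, OS X 10.5 and later keep these records in /var/run/utmpx and the Apple System Log instead of the old wtmp/lastlog files, which would explain why those paths are missing.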

    Read the article

  • On installing nvidia drivers on 12.10 I get "Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)"

    - by james
    New Ubuntu user here; I recently made the mistake of trying a different nvidia driver. I'd managed to get the previous one (nvidia-current) working through Software Sources a few weeks ago. The other day I tried to cross over to nvidia-experimental-310 and this produced a system error. Swapping back and forth between the proprietary drivers now always causes an error and I can't get any of them to work. Installing through the terminal I get this error message every time:
        Building initial module for 3.5.0-19-generic
        Error! Bad return status for module build on kernel: 3.5.0-19-generic (x86_64)
        Consult /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log for more information
    On rebooting I end up with the crappy screen resolution and a thick black border around the screen. I use gksudo software-properties-gtk to bring up Software Sources, where I can change back to the nouveau driver, which restores my screen. After that I can't find /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log, so I can't tell you what's inside. Any ideas what might be preventing the nvidia driver from installing? SOLUTION FOUND: I have a workaround. This is what worked: upgrade to kernel 3.7.0 as detailed here, then upgrade to the latest version of the nvidia drivers as detailed here. No idea what was happening with kernel 3.5.0-19, but this seems to be better. A little slower maybe on boot, but after days of messing around it's nice to have something that works.
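    In case it helps someone hitting the same "Bad return status" message: the DKMS build step usually fails when the headers for the running kernel are missing, and the make.log only sticks around while the failed package is still installed; a hedged sketch (package name taken from the question):

        # make sure the headers matching the running kernel are present
        sudo apt-get install linux-headers-$(uname -r)

        # see which modules DKMS has built or installed, and for which kernels
        dkms status

        # retry the driver build and read the log before switching drivers again
        sudo apt-get install --reinstall nvidia-experimental-310
        sudo cat /var/lib/dkms/nvidia-experimental-310/310.14/build/make.log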

    Read the article

  • How do I map some subdirectories to run alongside a Drupal site?

    - by paradroid
    I have a Drupal site running on Apache using the following vhosts file:
        <VirtualHost xx.xx.xx.xx:80>
            ServerName bananas.net
            ServerAlias www.bananas.net
            DocumentRoot /var/www/drupal/
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !=bananas.net [NC]
            RewriteRule ^(.*)$ http://bananas.net$1 [L,R=301]
            <Directory /var/www/bananas.net/>
                Options -Indexes FollowSymlinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            ErrorLog ${APACHE_LOG_DIR}/error.log
        </VirtualHost>
    I set it up some time ago, so I am not sure what the <Directory /var/www/bananas.net/> directive was meant for. That directory is currently empty. With the vhosts file the way it is, does the Directory directive have any effect at all? I want to add some content which is separate from the Drupal site. How do I add sub-directories within /var/www/bananas.net/ which can be accessed alongside the Drupal site running at the root? As they have nothing to do with the Drupal site, I want to keep the files separate, but still using the same domain.
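    One way to do this, sketched under the assumption that the extra content lives in a subdirectory such as /var/www/bananas.net/static (the name is an example): an Alias maps a URL prefix onto a directory outside DocumentRoot, so those requests never reach Drupal's front controller or its .htaccess rules.

        # inside the same <VirtualHost> block
        Alias /static /var/www/bananas.net/static

        <Directory /var/www/bananas.net/static>
            Options -Indexes FollowSymlinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    With that in place, http://bananas.net/static/... is served straight from the filesystem while everything else keeps going to /var/www/drupal/, and the existing <Directory /var/www/bananas.net/> block finally has something to apply to.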

    Read the article

  • What are possible reasons why a calendar entry in OWA is at a different time than in Outlook?

    - by Ken Pespisa
    We have two Exchange 2003 servers: our primary server and a front-end server that hosts Outlook Web Access (OWA). When I open my boss' calendar via Outlook 2007 (from my Outlook client as well as hers) I see the event scheduled for 10:30 am. When I open her calendar via Outlook Web Access, the same event is scheduled for 4:30 am. I don't understand Exchange well enough to imagine how this is possible, so if you have any ideas why this could be happening, I'd greatly appreciate them. There must be some cached data on the front-end server that causes the calendar entry to appear at a different time, I suppose. Any insight into how Exchange manages that cache, and where I could look for an issue, would be very helpful. Thank you!

    Read the article

  • Safari Private Browsing and gmail

    - by John Smith
    I have two gmail accounts. If I follow these steps then Safari will not let me log in to any other gmail account: (1) go to gmail and log in as user1; (2) enable Private Browsing; (3) go to one website; (4) disable Private Browsing; (5) go to gmail and log out. At this point gmail will not let me switch to user2; I have to quit the whole browser before I get that option. Is there a way to fix this? I am not trying to open two gmail accounts at the same time, just one after the other. As long as I do not enter Private Browsing mode between the two logins, I can switch between account1 and account2. Also, changing browser to Firefox is not an option.

    Read the article

  • MySQL server hangs after approximately 1 month (CentOS)

    - by user25061
    Hey guys, I've got a web server running Apache/PHP and MySQL (CentOS), and MySQL seems to hang once every month or so. As far as I can tell there are a few slow queries being resolved, but other than that I can't really see any reason why MySQL would hang. I'm having trouble pinpointing the problem: nothing is showing up in /var/log/mysqld.log, and again, there are a few slow queries, but nothing out of the ordinary. Load is average at the time of the crashes. Can I please get some hints on how to work out the issue? I can't reproduce it on our staging environment, so I'm a little stuck.
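    One way to have something to look at after the next hang is to make mysqld log more than it does by default and to capture the process list while it is wedged; a hedged sketch (option names as used by the MySQL 5.0/5.1 builds CentOS ships):

        # /etc/my.cnf additions
        [mysqld]
        log-error        = /var/log/mysqld.log
        log-slow-queries = /var/log/mysql-slow.log
        long_query_time  = 2

        # when it hangs, capture what the server is doing before restarting it
        mysqladmin -u root -p processlist > /root/mysql-hang-$(date +%F).txt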

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game using the naive component-based architecture (entities are a bag of components; entity.Update() calls Update on each updateable component). While adding new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing. Multithreading / concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only the things that crash, like lists, and ignoring the staleness of read state: some states are already updated, others aren't). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas: 1) switch from the naive CBA to a CBA with sub-systems that work on lists of components; 2) make all state changes explicit; 3) combine 1 and 2 :p Example world update:
        statePostProcessing.Wait()  // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state), // populates the state bag
            () => MovementSubSystem.Update(state),  // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start()
        // Tick is finished, the post processing happens in the background
    So basically the changes are (consistently) based on the data from the last tick; the post processing can a) generate network packets and b) fix conflicts / remove useless changes (example: an entity has been destroyed, so ignore its movement etc.). EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what happened in the game these state bags originated from, giving "free" replay capability. EDIT 2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.

    Read the article

  • Video Bug after a fresh installation

    - by Matan
    Hello, I just installed Ubuntu 10.10 (I'm brand new to Ubuntu) on my laptop. I seem to have a video bug that I don't know how to deal with. When the log-in screen comes up, the boxes are way off in the corner of the screen (partially off it). When I enter my password, the screen goes black for a few seconds, then returns to the login screen. I can open a Terminal window and enter my login info that way. When I go back to Gnome (Ctrl+Alt+F7 or whatever) it shows me as "logged in" but I still can't get to the desktop. If anyone has any advice, I'd love to hear it--just try to use simple language, please, since I really don't know Linux at all yet! I'm running an Averatec 3700 Series: Mobile AMD Sempron 3000+ 512 MB DDR, 80 GB HDD After looking at this question I tried going in through Failsafe mode (took me a while to figure out the hold-shift-while-booting thing _<) and playing around with the resolution. Setting a somewhat wider resolution did seem to fix things so that I can log into regular GNOME, I think. I'm not sure if this fix will persist, but it seems like it might!

    Read the article

  • Free Xsigo Technical Pre-sales workshop for Selected Partners !

    - by mseika
    In 2012 Oracle acquired Xsigo, a developer of network I/O virtualisation solutions. This acquisition complements Oracle's extensive virtualisation portfolio. With Oracle Virtual Networking products (Xsigo) you can: virtualise connectivity from any server to any storage and any network; reduce datacentre complexity by 70%; and cut infrastructure expenses by up to 50%. Benefits to channel partners: offer a unique proposition that your competitors can't match; provide an innovative solution that delivers more performance at less cost; and earn high margins that help sell more products and services. This course is aimed at technical pre-sales consultants, equipping them to provide detailed demos and to architect RFP feedback and customer solutions. The language of this event is French. WHEN: 24th September 2013. WHERE: Oracle France, 15, boulevard Charles De Gaulle, 92715 COLOMBES. FEES: free of charge. AGENDA: 09.00 Welcome, Coffee & Introduction; 09.30 Value Propositions, Architecture & Use Cases; 11.30 Build an OVN Web Quote & TCO; 12.30 Lunch; 13.30 Competitive Summary; 14.00 Design Scenario Workshop; 15.45 Questions/Opportunities. REGISTRATION: register via this link as soon as possible, by 14th June at the latest. Note that we have only 20 seats in total for this event, and that after 14th June we will release free seats for other organisations to register. We look forward to your participation! What we expect from you: you will bring your own laptop (recommended browser is Firefox 10 ESR); you have checked the material and conducted the assessments; and you will be flexible in terms of agenda and progress, as we intend this to be more of a workshop with dialogue rather than sticking tightly to the tentative timeline. What this is not: this PartnerLab does not replace Oracle University trainings, it does not lead to a certification as such, and it does not by itself give partners full and complete implementation skills.

    Read the article

  • PuTTY automatically supply password

    - by Kyle Cronin
    I have a situation where I need to have PuTTY (or another SSH client for Windows) automatically log into another machine via SSH. I realize that this isn't a good idea security-wise, but unfortunately I'm constrained by the limitations both on the client and the server. The best solution would be to have a shortcut or script on the desktop that, when double clicked, will connect to the server and automatically log in. Can I do this with PuTTY? I am willing to explore public key authentication, but I'm not sure where the PuTTY key resides or how to copy it to the server, as the app starts automatically upon login.
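    PuTTY and its command-line sibling plink both accept the password on the command line, so a plain desktop shortcut can do the automatic login; a sketch (host, account and paths are placeholders, and be aware the password sits in the shortcut in clear text):

        :: target of a Windows shortcut -- opens an interactive session and logs in
        "C:\Program Files\PuTTY\putty.exe" -ssh user@server.example.com -pw secretpassword

        :: key-based alternative: generate a key pair with puttygen.exe, append the
        :: public key to ~/.ssh/authorized_keys on the server, then point at the .ppk
        "C:\Program Files\PuTTY\putty.exe" -ssh user@server.example.com -i C:\keys\mykey.ppk

    With the key route, Pageant can hold the key so the shortcut needs no password at all.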

    Read the article

  • Prioritize file sharing performance in Windows Server 2008

    - by cmbrnt
    I've got a server running Windows Server 2008 which I use mainly for sharing files throughout the domain from a number of disks. It's running on VMware ESXi 4.0, in case that matters. My problem is that when I log in to the server to check user permissions etc., access to the files on the remote disks almost grinds to a halt. I haven't been able to measure the speeds, but I would guess it slows down to about 100 kB/s as soon as I log in. This is on a gigabit network and the problem is the same for all users, even the ones connected to the same switch as the server. I've assigned 2 GB of RAM to the server and reserved 1.5 GHz of processor power for it. I don't have to do anything special on the server for this slowdown to occur. How can I make sure file sharing is prioritized on the server, so that no matter what applications I'm using, file sharing always works properly? Could this be a VMware issue?

    Read the article

  • Server to server replication and CPU and 32k\ corrupt doc

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, then server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic (I realise the IBM recommendation is a dedicated port for the cluster). Cluster queues are in tolerance, and under heavy application user load, i.e. when the maximum number of documents are being created, edited and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 minutes, so that the replication load is taken by the hub and not the cluster mates. The problem we have: every now and then we get a slow replication from the hub to a cluster mate, sometimes up to 30 minutes. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past we have found that a corrupt document in the DB can have this effect, but on those occasions the server log shows the corrupt doc's noteId; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run fixup mydb.nsf -V, which shows it purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem! Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events, and I have set the replication time-out limit to 5 minutes (the max we usually see under full user load is 0.1 min) to prevent it replicating for 30 minutes as before; this is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert. Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to track down the source of the issue as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2-way data transfer. But if we do encounter a 32K doc, we can't look at its field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help / comments on this would be gratefully received.

    Read the article

  • Why does try_files append each path together?

    - by Tom
    I'm using try_files like this:
        http {
            server {
                error_log /var/log/nginx debug;
                listen 127.0.0.1:8080;
                location / {
                    index off;
                    default_type application/octet-stream;
                    try_files /files1$uri /files2/$uri /files3$uri;
                }
            }
        }
    In the error log, it's showing this:
        *[error] 15077#0: 45399 rewrite or internal redirection cycle while internally redirecting to "/files1/files2/files3/path/to/my/image.png", client: 127.0.0.1, server: , request: "GET /path/to/my/image.png HTTP/1.1", host: "mydomain.com", referrer: "http://mydomain.com/folder"
    Can anyone tell me why nginx is looking for /files1/files2/files3/path/to/my/image.png instead of /files1/path/to/my/image.png, /files2/path/to/my/image.png and /files3/path/to/my/image.png? Thanks
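    For reference, every argument to try_files except the last is checked as a file under root; the final one is not checked but used as a fallback, in this case an internal redirect that re-enters the same location and loops. A hedged sketch that keeps the three prefixes but ends with a status code instead:

        location / {
            default_type application/octet-stream;
            # all three prefixes are now checked as files; if none exist,
            # nginx answers 404 instead of redirecting back into this location
            try_files /files1$uri /files2$uri /files3$uri =404;
        }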

    Read the article
