Search Results

Search found 14435 results on 578 pages for 'disk usage'.

  • ClassNotFoundException returned for all plugins

    - by razumny
    I am trying to use a Java applet (any Java applet), but I always get a message saying "Error. Click for details". When I do so, the pop-up says: Application Error ClassNotFoundException jreVerification.class. When I click the "Details" button, all I see is the following:
      Java Plug-in 10.7.2.10
      Using JRE version 1.7.0_07-b10 Java HotSpot(TM) Client VM
      User home directory = C:\Users\razumny
      ----------------------------------------------------
      c:   clear console window
      f:   finalize objects on finalization queue
      g:   garbage collect
      h:   display this help message
      l:   dump classloader list
      m:   print memory usage
      o:   trigger logging
      q:   hide console
      r:   reload policy configuration
      s:   dump system and deployment properties
      t:   dump thread list
      v:   dump thread stack
      x:   clear classloader cache
      0-5: set trace level to <n>
      ----------------------------------------------------
    I am running Windows 7 Professional and am up to date on patches. The problem occurs in Google Chrome, Mozilla Firefox and Internet Explorer, regardless of which Java applet I am running. The error quoted above came from here: http://java.com/en/download/installed.jsp?detect=jre I have attempted the following to rectify the issue:
      - Uninstall and reinstall Java
      - Uninstall Java, reboot, install Java
      - Uninstall Java, delete all registry entries, reboot, install Java
    In addition, I have run malware and virus scans, none of which have shown anything of relevance. At this point I am at my wit's end, and so I turn to you.

  • What Is The Proper Laptop Battery Care While Running Laptop Solely On Battery?

    - by Boris_yo
    For convenience, I had to move my laptop to another room, away from the room where I always ran it on a UPS without using the battery. Since I now always run the laptop on battery, I wonder about the proper usage to prolong battery life. Currently I run the laptop on battery with the power supply connected, so the battery keeps charging until it reaches 100%; when it does, I disconnect the power supply and continue working until the battery meter shows 10% remaining. That's when I plug the power supply back in and let it charge to 100% again while I work. But it takes a long time to fully charge the laptop while working, since my power supply is only 60W, which is probably the reason for the slow charge (I believe the charger I use is an express charger). Charging the laptop to full while working makes me think that if it takes that much longer to charge, the battery may stay warm for the whole charging period. That brings me to the question: should I keep running the laptop as described above, or would it be better to leave the power supply permanently connected and keep the battery between 99% and 100%? On one hand that won't keep the battery warm, but it will top the battery up every time it drops to 99% (which might reduce battery life?). On the other hand, if I keep working solely on battery and recharge when it falls below 10%, the battery will get warm, but only while charging. Can anybody suggest the correct way of running a laptop on battery to ensure better battery life? Dell Latitude E6420, Windows 7 64-bit.

  • Missing MB on a GPT-partitioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:
      /dev/sda1 : 1 MB,   no FS, flag=bios_grub
      /dev/sda2 : 30 MB,  /boot, ext2, flag=boot
      /dev/sda3 : 20 GB,  /home, ext4
      /dev/sda4 : ~20 GB, /,     ext4
    After struggling to install grub2 from the livecd environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when I was inspecting disk usage with df I noticed that my home partition had around 170MB of used space on it. This surprised me because the only things on /home were one user's .bashrc, .bash_history, and .lesshst. du confirmed that there were only a few KB of space being used on /home. Why does df report approximately 170MB used when du does not? Is this space "gone forever", or can I regain it by repartitioning and/or reinstalling? When I installed grub2 it said something along the lines of "your embed area is too small" and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end, the only way I could get a system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170MB? Thanks
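    For reference, a minimal sketch of the comparison described above, plus a look at the filesystem's own overhead (device names taken from the layout in the post). The ext4 journal is a common reason df shows used space that du cannot attribute to files, and the reserved-block count explains why used plus available rarely equals the partition size:
      df -h /home                     # filesystem view: size, used, available
      sudo du -shx /home              # file view: what actually lives there
      sudo tune2fs -l /dev/sda3 | grep -i 'reserved block count'
      sudo dumpe2fs -h /dev/sda3 | grep -i journal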

  • MySQL consuming all system memory on INSERT ... SELECT

    - by siete
    The mysql daemon is getting killed because Linux is running out of memory:
      Oct 24 07:41:23 <hostname> kernel: [82297.673701] Out of memory: kill process 13816 (mysqld) score 1839626 or a child
    There is a link with some workarounds for this. It only happens when executing an INSERT ... SELECT query with a very large result set. The MySQLTuner script says the theoretical maximum memory is less than 8GB, but top and munin show that it grows beyond all available RAM and swap:
      [--] Total buffers: 560.0M global + 72.2M per thread (100 max threads)
      [OK] Maximum possible memory usage: 7.6G (43% of installed RAM)
    I've tried to tune some options with no results; these are the relevant ones:
      skip-locking
      max_connections = 100
      key_buffer_size = 512M
      max_allowed_packet = 32M
      table_open_cache = 2000
      open_files_limit = 3000
      sort_buffer_size = 16M
      read_buffer_size = 16M
      read_rnd_buffer_size = 8M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 4
      query_cache_size = 16M
      query_cache_limit = 2M
      thread_concurrency = 4
      join_buffer_size = 32M
      tmp_table_size = 32M
      max_heap_table_size = 32M
      query_cache_limit = 8M
      bulk_insert_buffer_size = 64M
      myisam_max_sort_file_size = 50GB
      myisam_mmap_size = 10GB
    And here is a summary of the system:
      OS: Linux Debian "Squeeze" 6.0.8 (upgraded yesterday)
      RAM: 18GB
      Swap: 18GB
      MySQL: 5.1.72-2 (official Debian release)
    At this moment, upgrading or changing the OS or MySQL version is not possible. Is there any option I have missed that could help? Sorry for my English, and thank you in advance! Edit: I'm only using MyISAM tables, and cannot change to InnoDB.
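    A rough back-of-the-envelope check of where that 7.6G ceiling comes from (a sketch using the values quoted above; MySQLTuner's real formula may count more buffers):
      GLOBAL=$((512 + 16 + 32))          # key_buffer_size + query_cache_size + tmp_table_size, in MB
      PER_THREAD=$((16 + 16 + 8 + 32))   # sort + read + read_rnd + join buffer per connection, in MB
      echo "$((GLOBAL + PER_THREAD * 100)) MB with max_connections = 100"   # ~7760 MB, i.e. ~7.6G
      # bulk_insert_buffer_size, myisam_sort_buffer_size and myisam_mmap_size sit outside this
      # estimate, which is one way a huge INSERT ... SELECT can blow past it.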

  • Why do hosts prefer Linux to Windows Server?

    - by iconiK
    So far I see a HUGE majority of hosts providing only Linux shared hosting, offering Windows only on VPS (or even only on dedicated servers). Why is that? While Windows is a lot more expensive than Linux (though it depends on a lot of factors, not just initial and support license cost), it also provides ASP.NET, IIS and, of course, Microsoft SQL Server. I know in the past it might have been because cPanel was Linux-only, but now they have a Windows version. So why is Linux still predominantly used for shared hosting? PHP works on both systems. IIS can be (and probably is) faster. MySQL runs on both systems as well. cPanel has a Windows version. Python, Perl and Ruby all run on Windows too. You even have MS SQL Server Express, which I find superior to MySQL in both speed and features. Access is there for low-usage requirements, as is SQLite (which is so great for quick small stuff). And with PowerShell you have a good alternative to the Unix shell. EDIT: I am looking for common reasons; I realize each hosting company (and/or its clients) may have different needs. This becomes very important when you get to VPS or Cloud, which give you a full operating system to use.

  • Intermittent Trouble Entering Hibernate on WinXP

    - by kquinn
    My personal desktop, running 32-bit Windows XP SP2 (with 4GB RAM, 2.75GB addressable, swap disabled, hiberfil.sys existing and contiguous on C:\; SP3 is not installed because SP2 has been working fine and I do not want to re-qualify with SP3 just for sheer perversity), typically gets hibernated at night. For a long time this worked great, but recently the machine has had trouble entering hibernation. Sometimes when I press my power button (configured to hibernate), the box will start the procedure for hibernating (i.e., go to the blue "Windows XP" background logo and display a message about entering hibernation), but before displaying the usual blue-on-black hibernation progress bar it will drop back to the desktop. No error messages appear on screen. The only record of unsuccessful hibernation attempts in the system log is a proud proclamation that "The Windows Image Acquisition (WIA) service entered the running state.", once per failed hibernation attempt. The problem is almost certainly resource related: if I then close one or more running applications and repeat the exact same process, the machine will hibernate perfectly. There does not appear to be a reliable high-water mark for virtual or physical memory use below which the machine is guaranteed to hibernate; it's different every time (though typically, hibernation succeeds most often below about 1.1–1.4 GB of memory usage). Memory may not even be the relevant resource; as far as I know, it could also be handles or sockets. This behavior is relatively recent: it has only started in the last few months; before then, I could hibernate reliably no matter what the current resource use of the system was. This machine claims to have hotfix Q909095 installed, but since the symptoms of my problem match KB909095 rather well, I'm suspicious as to whether this fix is actually working as intended. Any ideas on how to fix this, or where to start debugging?

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics is nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preferred desktop is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing, so I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery because of CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware it's still in testing. Some people report that it's smooth and lightweight, others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight". And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on the GNOME forums.)
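    For what it's worth, Metacity's built-in compositor in GNOME 2.x is toggled through a single gconf key, so it is cheap to try and revert; a sketch (key name as commonly documented for GNOME 2 — verify with gconf-editor on your build):
      # Enable Metacity's compositor:
      gconftool-2 -s --type bool /apps/metacity/general/compositing_manager true
      # ...and turn it back off if battery drain or nVidia glitches appear:
      gconftool-2 -s --type bool /apps/metacity/general/compositing_manager false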

  • Solr startup script problem

    - by Camran
    I have installed Solr and it finally works... I now have problems setting it up to start automatically with a start command. I have followed a tutorial and created a file called solr in the /etc/init.d directory. Here is that file:
      #!/bin/sh -e
      # SOLR auto-start
      #
      # description: auto-starts solr engine
      # processname: solr-production
      # pidfile: /var/run/solr-production.pid

      NAME="solr"
      PIDFILE="/var/run/solr-production.pid"
      LOG_FILE="/var/log/solr-production.log"
      SOLR_DIR="/etc/jetty"
      JAVA_OPTIONS="-Xmx1024m -DSTOP.PORT=8079 -DSTOP.KEY=stopkey -jar start.jar"
      JAVA="/usr/bin/java"

      start() {
          echo -n "Starting $NAME... "
          if [ -f $PIDFILE ]; then
              echo "is already running!"
          else
              cd $SOLR_DIR
              $JAVA $JAVA_OPTIONS 2> $LOG_FILE &
              sleep 2
              echo `ps -ef | grep -v grep | grep java | awk '{print $2}'` > $PIDFILE
              echo "(Done)"
          fi
          return 0
      }

      stop() {
          echo -n "Stopping $NAME... "
          if [ -f $PIDFILE ]; then
              cd $SOLR_DIR
              $JAVA $JAVA_OPTIONS --stop
              sleep 2
              rm $PIDFILE
              echo "(Done)"
          else
              echo "can not stop, it is not running!"
          fi
          return 0
      }

      case "$1" in
          start)
              start
              ;;
          stop)
              stop
              ;;
          restart)
              stop
              sleep 5
              start
              ;;
          *)
              echo "Usage: $0 (start | stop | restart)"
              exit 1
              ;;
      esac
    Whenever I do solr -start I get this error: "Error occurred during initialization of VM Could not reserve enough space for object heap". I think this is because of the file above... Also, here is where I have Solr installed: /var/www/solr, and here is where the start.jar file is located: /var/www/start.jar. Help me out if you know what's causing this. Thanks. BTW: the OS is Ubuntu 9.10.
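    A quick way to check whether the -Xmx1024m in JAVA_OPTIONS is what the VM is choking on (a sketch; run it as the same user the init script runs as):
      free -m                    # how much RAM and swap the box really has free
      java -Xmx1024m -version    # reproduces the "object heap" error if 1 GB cannot be reserved
      # If it fails, lower -Xmx in JAVA_OPTIONS (e.g. -Xmx512m) and retry the script.
      # Note the script's SOLR_DIR points at /etc/jetty while start.jar lives under /var/www.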

  • HELP PLEASE!!!! External component has thrown an exception. ASP.NET ASPX PAGE POST

    - by Brandon
    I have an aspx page that communicates with a webservice I have. It connects to an SQL Server database on my virtual dedicated server. With just a little usage, I get this error:
      External component has thrown an exception.
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
      Exception Details: System.Runtime.InteropServices.SEHException: External component has thrown an exception.
      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
      Stack Trace:
      [SEHException (0x80004005): External component has thrown an exception.]
         Luxand.FSDK.Initialize(String DataFilesPath) +0
         WebService.onLoad() +70
         WebService..ctor() +91
         facematch.btn_submit_Click(Object sender, EventArgs e) +218
         System.Web.UI.WebControls.Button.OnClick(EventArgs e) +105
         System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +107
         System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +7
         System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +11
         System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
         System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1746

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. A lot of this previous work can often be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look but otherwise archaic and very inefficient. So I am looking for software that will allow me to do the following:
      - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya
      - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above
      - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone)
      - A nice interface that non-technically-savvy folks can warm up to ;)
      - A desktop app would be handy, so that people don't always have to click upload or something
    A tree-based system is inefficient in this case because content is usually linked across branches, and also people might not quite agree on one format of a tree. Tagging works around this very nicely. What I have considered so far:
      - Evernote (for its more professional look)
      - Springpad (for its versatility with content)
      - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)
    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

  • Loading a big database dump into PostgreSQL using cat

    - by RussH
    I have a pair of very large (~17 GB) database dumps that I want to load into PostgreSQL 9.3. After installing the database packages, learning more or less how to use them, and fiddling around a little on various StackExchange pages (particularly this question), it looks like a proper command for me to use is something like: cat mydb.pgdump | psql mydb because of the format the dump is in. My machine has 16 GB of RAM, and I'm not familiar with the cat command, but I do know that my RAM is 99% exhausted and the database is taking a while to load. My machine isn't unresponsive to the point of hanging; I can run other commands in other terminal windows and have them execute at a reasonable clip, but I am wondering whether cat is the best way to pipe in the file or if something else is more efficient. My concern is that cat could be using up all the RAM so the database doesn't have much to work with, throttling its performance. But I'm new to thinking about RAM issues like this and don't know if I'm worrying about nothing. Now that I think about it, this seems to be more a question about cat and its memory usage than anything else. If there is a more appropriate forum for this question please let me know. Thanks!
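    For comparison, equivalent invocations that skip the extra cat process (a sketch; note that cat only streams the file, so the vanishing RAM is more likely the kernel page cache, which top and free count as used but which the kernel reclaims on demand):
      psql mydb < mydb.pgdump          # plain-SQL dump via shell redirection
      psql -f mydb.pgdump mydb         # same thing, letting psql open the file itself
      # pg_restore -d mydb mydb.dump   # only for custom-format dumps made with pg_dump -Fc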

  • big cpu load on vmware server / linux

    - by dezfafara
    Hi, I am currently using VMware Server 2.x hosting 4 virtual machines on a Linux system. Today, on my physical server, I saw an enormous load average. This is the "top" of the server, showing my 4 virtual guests:
      top - 11:02:02 up 194 days, 23:09,  5 users,  load average: 18.78, 12.05, 13.55
      Tasks: 113 total,   4 running, 109 sleeping,   0 stopped,   0 zombie
      Cpu0 : 71.6%us, 19.0%sy, 0.0%ni,  8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
      Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu2 : 72.5%us, 17.6%sy, 0.0%ni,  9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu3 : 79.5%us,  4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:   8178884k total,  8129980k used,    48904k free,   134904k buffers
      Swap: 10490436k total,      148k used, 10490288k free,  6129728k cached

        PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+  COMMAND
       7312 root   6 -10 1149m 921m 559m R   97 11.5 107947:09  vmware-vmx
       6995 root   6 -10  779m 687m 317m R   92  8.6 107374:31  vmware-vmx
       6693 root   6 -10  880m 659m 409m S   85  8.3  76947:33  vmware-vmx
      12937 root   6 -10  960m 719m 523m S   75  9.0  67219:49  vmware-vmx
    The %CPU figures are the CPU usage of my 4 virtual guests. These guests run Linux, and the relevant processes usually sit at 5% - 15% of a CPU. I don't understand why I have had this big problem for the last few days. This is the "top" inside a virtual guest that the host shows at 95% CPU load:
      top - 11:23:15 up 194 days, 23:13,  4 users,  load average: 0.25, 0.47, 0.59
      Tasks:  92 total,   2 running,  90 sleeping,   0 stopped,   0 zombie
      Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem:   382296k total,  369732k used,   12564k free,  145156k buffers
      Swap:  979924k total,   13956k used,  965968k free,   86988k cached

        PID USER  PR NI  VIRT  RES SHR S %CPU %MEM    TIME+  COMMAND
       3691 root  20  0 23948 1148 960 S 13.0  0.3 15339:23  vmware-guestd
       3840 root  20  0 19880  584 512 S  7.7  0.2  1729:17  hald-addon-stor
    This virtual guest's state looks OK... If anyone has any ideas... Thanks

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any). Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still seems to lose about 10MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory loss pattern begins again over the hours.
    Before:
                   total       used       free     shared    buffers     cached
      Mem:       7130244    5005376    2124868          0     113628     422856
      -/+ buffers/cache:    4468892    2661352
      Swap:     33554428          0   33554428
    After:
                   total       used       free     shared    buffers     cached
      Mem:       7130244    4467144    2663100          0        228      11172
      -/+ buffers/cache:    4455744    2674500
      Swap:     33554428          0   33554428
    My Ruby code does use memoizations and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:
      worker_processes 15
      listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
      listen 8080, :tcp_nopush => true
      timeout 180
      pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"
      GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true
      before_fork do |server, worker|
        STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
        print_gemfile_location
        defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
        defined?(Resque) and Resque.redis.client.disconnect
        old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
        if File.exists?(old_pid) && server.pid != old_pid
          begin
            Process.kill("QUIT", File.read(old_pid).to_i)
          rescue Errno::ENOENT, Errno::ESRCH
            # already killed
          end
        end
        File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
      end
      after_fork do |server, worker|
        defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
        defined?(Resque) and Resque.redis.client.connect
      end
    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia
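    One way to tell the two cases apart (a sketch; adjust the process name, which may be ruby, ruby1.8 or unicorn depending on the setup): if per-worker RSS stays flat while "free" falls, the loss is just the page cache the kernel reclaims on demand (the -/+ buffers/cache line above already nets it out); if RSS climbs steadily, that points at the Ruby heap instead.
      # Snapshot the workers' resident set sizes alongside free memory once a minute:
      watch -n 60 'ps -C ruby -o pid,rss,etime,args --sort=-rss | head -20; free -m'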

  • Pip installs on Arch Linux fail to build egg

    - by stmfunk
    I was trying to install nltk on my Arch Linux server but it repeatedly fails with the following error output:
      /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'entry_points'
        warnings.warn(msg)
      /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'zip_safe'
        warnings.warn(msg)
      /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'test_suite'
        warnings.warn(msg)
      usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
         or: setup.py --help [cmd1 cmd2 ...]
         or: setup.py --help-commands
         or: setup.py cmd --help

      error: invalid command 'bdist_egg'
      /tmp/pip_build_root/nltk/distribute-0.6.21-py3.3.egg
      Traceback (most recent call last):
        File "./distribute_setup.py", line 143, in use_setuptools
          raise ImportError
      ImportError

      During handling of the above exception, another exception occurred:

      Traceback (most recent call last):
        File "<string>", line 16, in <module>
        File "/tmp/pip_build_root/nltk/setup.py", line 23, in <module>
          distribute_setup.use_setuptools()
        File "./distribute_setup.py", line 145, in use_setuptools
          return _do_download(version, download_base, to_dir, download_delay)
        File "./distribute_setup.py", line 125, in _do_download
          _build_egg(egg, tarball, to_dir)
        File "./distribute_setup.py", line 116, in _build_egg
          raise IOError('Could not build the egg.')
      OSError: Could not build the egg.
      ----------------------------------------
      Cleaning up...
      Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/nltk
      Storing complete log in /root/.pip/pip.log
    This error also occurs for matplotlib, but that's the only other library I have found it to fail on so far; pyyaml installs fine. The install works perfectly under virtualenv on my Mac, which uses Python 2.7, but the server is using Python 3.3. Any help is appreciated.
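    If the goal is just to get nltk onto that box, one workaround sketch (assuming the nltk of that era only supports Python 2, and that Arch's Python 2 interpreter lives at /usr/bin/python2):
      virtualenv -p /usr/bin/python2 ~/nltk-env   # build a Python 2 environment alongside the system 3.3
      . ~/nltk-env/bin/activate
      pip install nltk pyyaml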

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail: the more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown/shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?

  • mysql thread count

    - by Ryan M.
    We have a web application that uses Apache and MySQL. Generally (according to Munin) our MySQL thread count sits between 2 and 4 at all times. The other day, our server almost came to a halt: HTTP requests were slow or wouldn't go through at all, SSH would work but would take 30+ seconds to register keystrokes, etc. So we pulled up Munin, and the only thing out of normal boundaries was the MySQL thread count. CPU usage was under 1%, load was under 1.0, plenty of available RAM. As mentioned, the thread count floats around 2 to 4; at the time of our slowdown it had spiked to 14. So I started poking around the Internet, and I see that in most cases you'll start to see a higher thread count when you start running into slow queries. If I understand it correctly, a request comes in that takes a while to process; in the meantime other requests are coming in, so a new thread is created to work on each of them (yes?). But at the time of the slowdown, we had 0 slow queries. My question is: what else can cause MySQL to create additional threads? And could this sudden spike in threads possibly cause the server to slow down? To fix the issue, we restarted Apache and everything went back to its beautiful, normal self. Considering that the server vitals (CPU, RAM, network, etc.) were all ideal and the thread count was the only thing out of place, this seems like the most logical thing to pursue as the possible cause. If it matters, we're on MySQL 5.1.40. The server is FreeBSD 7.2 and the MySQL instance in question runs inside a jail.
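    Since each client connection gets its own thread (reused from thread_cache_size when possible), a spike in the thread count usually means connections queuing behind something slow rather than MySQL spawning threads on its own; a sketch for capturing what they are doing the next time it happens (credentials assumed):
      mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%'; SHOW FULL PROCESSLIST;"
      mysqladmin -u root -p extended-status | grep -Ei 'threads|aborted|slow'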

  • SharePoint 2010 Enterprise wiki - [New page] missing

    - by icelava
    I am trying to ramp up my knowledge of SharePoint deployment and usage (I never have before), due to a direction to use SharePoint 2010 as a repository platform (wiki format) for our customer's infrastructure documentation. On my test virtual server, a new site was set up from the Enterprise Wiki template. I went into Site Actions > Manage Site Features to activate Wiki Page Home Page. The default sub-web then went from /Pages to /SitePages and looks like the default Team template. The odd thing is that Site Actions is missing the New Page option. My colleague does not understand why this is the case, as it ought to be there. The original /Pages sub-web does have the option. What conditions are in play that influence the appearance of that option? UPDATE: Another phenomenon observed is that in the Site Actions > View All Site Content view, the wiki document libraries listed in the grid have their hyperlink (e.g. "Site Pages") lead straight to the default page; they do not show their own table listing of pages under that document library, unlike the original Pages document library, which shows up as a listing as expected. I wonder if this hints at any problems.

  • heavy load on mysql

    - by payal
    I have a dedicated server with a very good configuration (16 GB RAM, etc.), but I am facing heavy load from MySQL. I am running a music website; only one database is in use and only 5-10 pages are running. When I click on WHM's "Show Processlist" it shows only 2-3 processes. The WHM load average is always less than one, but when I click on the load details it shows 20% CPU usage by MySQL, and after some time it starts saying "can not connect to mysql ... mysql server has gone away":
      1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid
    I have tested static pages and they come up blazing fast, but all dynamic pages that use MySQL are terribly slow and take ages to open. My my.cnf file is:
      [mysqld]
      key_buffer = 1536M
      max_allowed_packet = 1M
      max_connections = 250
      max_user_connections = 15
      wait_timeout=40
      connect_timeout=10
      table_cache = 512
      sort_buffer_size = 2M
      read_buffer_size = 2M
      read_rnd_buffer_size = 8M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 8
      query_cache_size = 32M
      server-id = 14
      old-passwords = 1
      [mysqldump]
      quick
      max_allowed_packet = 16M
      [mysql]
      no-auto-rehash
      [myisamchk]
      key_buffer = 256M
      sort_buffer_size = 256M
      read_buffer = 2M
      write_buffer = 2M
      [mysqlhotcopy]
      interactive-timeout
    I have checked the error log and it says nothing. I have also increased the maximum connections to 1000, but the problem remains. If I disable that one database (just by changing its name) I can see the server and MySQL load drop to negligible within half an hour. I have tested everything; if there are some types of query that can cause heavy load on a server, please list them, but even then, for 5-10 pages it should never cause this much load. I have seen servers with 500 websites that work just fine.

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage:
      avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4
    Output:
      avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers
        built on Jun 15 2012 13:54:35 with gcc 4.7.0
      HandShake: client signature does not match!
      Metadata:
        height          480.00
        remote_addr:    sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0
        start           30400239.52
        timeshift_duration 319250.58
        timeshift_size  120000.00
        width           640.00
      [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate
      Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af':
        Duration: N/A, start: 0.000000, bitrate: N/A
          Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc
      Output #0, segment, to '%05d.mp4':
        Metadata:
          encoder         : Lavf53.21.0
          Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc
      Stream mapping:
        Stream #0:0 -> #0:0 (copy)
      Press ctrl-c to stop encoding
      ^Cframe= 9566 fps= 36 q=-1.0 Lsize=      -0kB time=318.25 bitrate=  -0.0kbits/s
      video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071%
      Received signal 2: terminating.
    Result:
      serafim@yard:~/video2$ ls
      00000.mp4  00001.mp4  00002.mp4  00003.mp4  00004.mp4  00005.mp4
    Now I try to play the files in a player such as VLC, and this is what happens: the first fragment (00000.mp4) plays fine, no problems, but from the second one onwards (00001.mp4 and beyond) the bug shows itself: the first 60 seconds of 00001.mp4 are a black screen, and only from second 61 does the video start playing. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How can I get rid of the black-screen delay at the beginning of the segments? Maybe there are parameters to pass to ffmpeg, or third-party software able to correct the resulting mp4 segments?
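    If switching tools is an option, newer ffmpeg builds expose a reset_timestamps option on the segment muxer so that each fragment starts at t=0; a sketch, assuming such a build is available (the avconv 0.8 shown above may not support it):
      ffmpeg -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af \
        -c:v copy -an -sn -map 0 \
        -f segment -segment_format mp4 -segment_time 60 -reset_timestamps 1 \
        -y %05d.mp4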

  • Procurve Primary VLAN

    - by fukawi2
    I'm trying to deprecate usage of VLAN 1 on my ProCurve switches; VLAN 1 is unused. I understand that VLAN 1 must exist, but I want to remove it from all ports, especially the trunks between switches. The problem I have is that stacking does not seem to work without VLAN 1. I have changed the primary VLAN and management VLAN on all the switches:
      (config)# primary-vlan 42
      (config)# management-vlan 42
      (config)# no vlan 1 untagged 25
    Port 25 is the link between the two switches I'm testing with (the stack master and a member switch); I only want tagged traffic between the switches, no untagged frames. show stacking on the master shows all members as "UP", but I cannot telnet to any of them: Telnet failed: Connection timed out. All switches have manually assigned (static) IP addresses on VLAN 42, and all exist in the same /25 subnet, as does my desktop. I can telnet to the individual switch IP addresses directly from my desktop, just not from the master switch. Do I need to reboot the switches for the primary-vlan change to take effect? Or is there something else I'm missing?

  • ESXi 5 Guests will not boot

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550M3 server. Note: Investigation eventually determined that the problem was with the VMware client on a Lenovo Edge laptop, the only Windows box available in a Linux IT shop. vSphere Client v4 and v5 duplicated the behavior on the Lenovo Edge. As indicated in the comment to the accepted answer, replacing the workstation with one using different video was the "fix" for this particular issue. The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560MB and drops down to 40MB after a few seconds. Initial CPU usage is 1 full CPU (3000Ghz per the chart) and immediately drops down to 29Mhz. Guests do not display any output in the Console tab but show a state of 'Powered On'. No errors in the Events tab. Switching a guest from BIOS to EFI makes no difference. VMs are listed as Version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious. Edit: No firewalls, routers, or VLANs in between the client and server. Edit 2: We have tried to boot a guest into the BIOS screen via the "Next Boot" checkbox in the guest settings. Was not successful. Edit 3: 500GB datastore with one 40GB VM on it. Plenty of space. Edit 4: Guests copied from my old ESXi 4 server do NOT boot on the ESXi 5 system. Initially it complains about too little video RAM being configured for the default 2500x1600, but it still doesn't work properly even after I bump the video RAM settings or switch it to Auto-Detect.

  • Which AMI to to use for Java/Tomcat/MySQL in Amazon EC2?

    - by Justin
    I originally posted this on stackoverflow.com and it was suggested that serverfault.com might be a better place to ask, so here goes: I'm trying to determine which Amazon Machine Image (AMI) to use as my virtual server in Amazon's EC2. For now, I'll need to choose an AMI that complies with the AWS Free Usage Tier. I want to deploy a Java app that I've been developing using Eclipse on Windows XP, Tomcat 7 and MySQL 5.5. I'm aware that I can choose the Basic 32-bit Amazon Linux AMI and then manually install Tomcat and MySQL (does MySQL get installed on the image, or separately on an Elastic Block Store (EBS) volume?). Here's the rub: I'm a bit of a Linux noob. I can start Tomcat and tail the logs and such on Linux, but I'm not familiar with the install process for Tomcat and MySQL on Linux, or with commands like sudo and chmod. I'm happy to get more hands-on with Linux, but I'm short on time right now. Are there AMIs that already have Tomcat and MySQL bundled? The Request Instance Wizard shows 805 community AMIs that are Free Tier eligible; 51 of them have "Tomcat" in their name. I'm willing to consider using Elastic Beanstalk, but my research thus far hasn't found any discussion of using MySQL with Beanstalk; the discussions all seem to use Amazon's SimpleDB. Any advice is greatly appreciated.
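    For what it's worth, the manual route on the stock Amazon Linux AMI is only a few commands (a sketch; the package names are assumptions and have changed between Amazon Linux releases):
      sudo yum install -y tomcat7 mysql-server
      sudo service mysqld start && sudo chkconfig mysqld on
      sudo service tomcat7 start && sudo chkconfig tomcat7 on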

  • Isolating a computer in the network

    - by Karma Soone
    I've got a small network and want to isolate one of the computers from the whole network. My network:
                                         <----> Trusted PC 1
      ADSL Router --> Netgear dg834g     <----> Trusted PC 2
                                         <----> Untrusted PC
    I want to isolate this untrusted PC within the network. That means the network should be secure against:
      * ARP poisoning
      * Sniffing
      * The untrusted PC should not see / reach any other computers within the network, but can still go out to the internet
    Static DHCP and switch usage solve the sniffing/ARP poisoning problem. I can enable IPSec between computers, but the real problem is sniffing the traffic between the router and one of the trusted computers. Against the untrusted PC grabbing a new IP address (a second IP address on the same machine) I think I need a firewall with port security, and I don't think my ADSL router supports that. To summarise, I'm looking for a hardware firewall/router that can isolate one port from the rest of the network. Could you recommend such hardware, or can I easily accomplish this with my current network?

  • Startup script for Red5 on Ubuntu 9.04

    - by user49249
    I am creating a startup script for Red5 on Ubuntu. Red5 is installed in /opt/red5. The following is a working script from a CentOS box on which Red5 is running:
      [code]
      ========== Start init script ==========
      #!/bin/sh
      PROG=red5
      RED5_HOME=/opt/red5/dist
      DAEMON=$RED5_HOME/$PROG.sh
      PIDFILE=/var/run/$PROG.pid

      # Source function library
      . /etc/rc.d/init.d/functions

      [ -r /etc/sysconfig/red5 ] && . /etc/sysconfig/red5

      RETVAL=0

      case "$1" in
        start)
          echo -n $"Starting $PROG: "
          cd $RED5_HOME
          $DAEMON >/dev/null 2>/dev/null &
          RETVAL=$?
          if [ $RETVAL -eq 0 ]; then
            echo $! > $PIDFILE
            touch /var/lock/subsys/$PROG
          fi
          [ $RETVAL -eq 0 ] && success $"$PROG startup" || failure $"$PROG startup"
          echo
          ;;
        stop)
          echo -n $"Shutting down $PROG: "
          killproc -p $PIDFILE
          RETVAL=$?
          echo
          [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$PROG
          ;;
        restart)
          $0 stop
          $0 start
          ;;
        status)
          status $PROG -p $PIDFILE
          RETVAL=$?
          ;;
        *)
          echo $"Usage: $0 {start|stop|restart|status}"
          RETVAL=1
      esac
      exit $RETVAL
      [/code]
    What do I need to replace in the above script for Ubuntu? My Red5 is in /opt/red5/, and to start it manually on Ubuntu I always run /opt/red5/dist/red5.sh. I could not find rc.d/functions on Ubuntu, and /etc/init.d/functions did not exist on my laptop either. I would like to be able to use the script with service, as Red Hat distributions do. I did check /lib/lsb/init-functions.
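    For comparison, a minimal Ubuntu/Debian-flavoured sketch of the same script built on /lib/lsb/init-functions and start-stop-daemon instead of Red Hat's functions library (paths follow the post; the pidfile and status handling are simplified and untested):
      #!/bin/sh
      PROG=red5
      RED5_HOME=/opt/red5/dist
      DAEMON=$RED5_HOME/$PROG.sh
      PIDFILE=/var/run/$PROG.pid

      # Ubuntu/Debian replacement for /etc/rc.d/init.d/functions
      . /lib/lsb/init-functions

      case "$1" in
        start)
          log_daemon_msg "Starting $PROG"
          start-stop-daemon --start --quiet --chdir "$RED5_HOME" \
            --background --make-pidfile --pidfile "$PIDFILE" --exec "$DAEMON"
          log_end_msg $?
          ;;
        stop)
          log_daemon_msg "Stopping $PROG"
          start-stop-daemon --stop --quiet --pidfile "$PIDFILE" --retry 10
          log_end_msg $?
          rm -f "$PIDFILE"
          ;;
        restart)
          $0 stop
          sleep 5
          $0 start
          ;;
        status)
          status_of_proc -p "$PIDFILE" "$DAEMON" "$PROG" && exit 0 || exit $?
          ;;
        *)
          echo "Usage: $0 {start|stop|restart|status}"
          exit 1
          ;;
      esac
      exit 0
    Once it behaves, sudo update-rc.d red5 defaults registers it to start at boot, and service red5 start/stop should work the same way it does on Red Hat.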
