Search Results

Search found 8687 results on 348 pages for 'per ersson'.

Page 76/348

  • Ranking For the Right SEO Terms

    Obviously, some things get searched for on Google much more than others. We can see that "Knitting" gets millions of searches per month, whereas a more obscure term like "Learn how to Knit Mittens" gets just a few hundred.

    Read the article

  • Part Played by SEO in Success of a Business

    With the advent of the internet, a number of websites have been established. What is a website? By definition, a website is nothing but a collection of web pages, images, and videos sharing a common domain name or IP address on an Internet Protocol based network.

    Read the article

  • Creating a Section 508 Accessible Site

    If you are a web developer by profession, you should be knowledgeable about the Section 508 standards. These standards help website developers make sites accessible to all, primarily disabled users who are visually or hearing impaired. In fact, under federal regulations, government websites must comply with the guidelines outlined in Section 508.

    Read the article

  • SEO - How to Make it Work For You

    The internet is constantly expanding, and as such it's necessary for internet entrepreneurs to think about how they can best harness its power and make money from all of the different outlets that are out there. This could involve such techniques as affiliate marketing, pay-per-click advertising, and SEO article writing.

    Read the article

  • NTUSER.DAT and UsrClass.dat files building up by the thousands, why and can I delete?

    - by Anthony
    I've noticed that my web server, a 2008 Xen VM, is gradually losing free space - more than I would have thought from normal use - and decided to investigate. There are two problem areas: *C:\Users\Administrator\ (6,755.0 MB)* with files: NTUSER.DAT{randomness}.TMContainer'0000 randomness'.regtrans-ms NTUSER.DAT{randomness}.TM.blf AND C:\Users\Administrator\AppData\Local\Microsoft\Windows\ (6,743.8 MB) with files UsrClass.dat{randomness}.TMContainer'0000 randomness'.regtrans-ms UsrClass.dat{randomness}.TM.blf From what I understand these are point-in-time backups of registry changes. If that is the case I cannot possibly understand why there would be 10,000+ changes. (That's how many files there are per folder location, over 20,000 per folder in total.) The files are using almost 15GB of space and I want rid of them; I'm just wondering whether I can remove them. However, I need to understand why they are being created so I can avoid this in the future. Any ideas why there would be so many? Is there a way I can check to see what is making the modifications? Are they created by login attempts? Are they created by everyday web server use? And so on.
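
    A minimal cleanup sketch, assuming these really are orphaned transaction logs as described above (run it from another admin account or offline, take a backup first, and leave the live NTUSER.DAT and UsrClass.dat hives themselves alone; the {* wildcards are a guess at the naming pattern quoted in the question):

        rem delete only the transaction-log leftovers, never the .dat hives themselves
        del /f /q /a "C:\Users\Administrator\NTUSER.DAT{*.regtrans-ms"
        del /f /q /a "C:\Users\Administrator\NTUSER.DAT{*.TM.blf"
        del /f /q /a "C:\Users\Administrator\AppData\Local\Microsoft\Windows\UsrClass.dat{*.regtrans-ms"
        del /f /q /a "C:\Users\Administrator\AppData\Local\Microsoft\Windows\UsrClass.dat{*.TM.blf"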

    Read the article

  • Setting up PerformancePoint Services on SharePoint 2010: connection errors

    - by Rik
    I have tried to set up PerformancePoint Services on SharePoint 2010, but every time I try to use the dashboard designer, I get this error: “An error has occurred attempting to contact the specified SharePoint site”. I have tried these steps but they haven't helped. Any ideas? The event log gives the following information: WebHost failed to process a request. Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/24724999 Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/client.svc' cannot be activated due to an exception during compilation. The exception message is: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item. --- System.ArgumentException: This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item at System.ServiceModel.UriSchemeKeyedCollection.InsertItem(Int32 index, Uri item) at System.Collections.Generic.SynchronizedCollection`1.Add(T item) at System.ServiceModel.UriSchemeKeyedCollection..ctor(Uri[] addresses) at System.ServiceModel.ServiceHost..ctor(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(Type serviceType, Uri[] baseAddresses) at System.ServiceModel.Activation.ServiceHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.CreateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) --- End of inner exception stack trace --- at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath) Process Name: w3wp Process ID: 2576
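
    For what it's worth, that ArgumentException usually means the IIS web site hosting /_vti_bin/client.svc has more than one HTTP binding, while WCF under .NET 3.5 accepts only one base address per scheme. A sketch of the usual workaround, assuming that is the cause here: either remove the extra binding in IIS, or add a prefix filter to the SharePoint web application's web.config (the hostname below is a placeholder):

        <system.serviceModel>
          <serviceHostingEnvironment>
            <baseAddressPrefixFilters>
              <!-- keep only the binding WCF should use as its base address -->
              <add prefix="http://sharepoint.example.com:80" />
            </baseAddressPrefixFilters>
          </serviceHostingEnvironment>
        </system.serviceModel>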

    Read the article

  • Problems setting NTP server with w32tm for a DC that is a Hyper-V guest

    - by R.Tonheim
    Hello! I have tried to set my DC to get its time from several NTP servers. I followed this answer (http://serverfault.com/questions/24298/w32time-sync-problems-for-hyper-v-guests-w32time-event-ids-38-24-29-35/24299#24299) to do it. First I disabled Time Synchronization in the Hyper-V Integration Services for each guest, then restarted the Windows Time service on the guest. Before this I had used this command: w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update And the cmd said: The command completed successfully. But the time was still 10 min wrong... I ran w32tm again after restarting the DC, without it having any effect. w32tm /query /status still says: "Source: Local CMOS Clock" FROM MY CMD: Microsoft Windows [Version 6.0.6002] Copyright (c) 2006 Microsoft Corporation. All rights reserved. C:\Users\Administrator.MHG>w32tm /query /status Leap Indicator: 0(no warning) Stratum: 1 (primary reference - syncd by radio clock) Precision: -6 (15.625ms per tick) Root Delay: 0.0000000s Root Dispersion: 10.0000000s ReferenceId: 0x4C4F434C (source name: "LOCL") Last Successful Sync Time: 05.09.2009 20:06:21 Source: Local CMOS Clock Poll Interval: 6 (64s) C:\Users\Administrator.MHG>w32tm /config /manualpeerlist:"ntp.uio.no;timekeeper.uio.no;nissen.uio.no;0.no.pool.ntp.org;1.no.pool.ntp.org;2.no.pool.ntp.org" /syncfromflags:manual /reliable:yes /update The command completed successfully. C:\Users\Administrator.MHG>w32tm /query /status Leap Indicator: 0(no warning) Stratum: 1 (primary reference - syncd by radio clock) Precision: -6 (15.625ms per tick) Root Delay: 0.0000000s Root Dispersion: 10.0000000s ReferenceId: 0x4C4F434C (source name: "LOCL") Last Successful Sync Time: 05.09.2009 20:06:21 Source: Local CMOS Clock Poll Interval: 6 (64s) C:\Users\Administrator.MHG>
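
    For reference, a sketch of the sequence that usually gets a Hyper-V guest DC off the local clock (peer list shortened from the question; w32tm expects the quoted list space-separated, and the 0x8 suffix forces client mode - the service restart is what typically makes /update actually take effect):

        w32tm /config /manualpeerlist:"ntp.uio.no,0x8 0.no.pool.ntp.org,0x8" /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover
        w32tm /query /status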

    Read the article

  • The boot selection failed because a required device is inaccessible 0xc000000e

    - by bbodenmiller
    A family member of mine recently went on vacation and turned off their computer, something they normally do not do; upon returning home it would not turn on, and it now returns the error message below. Generally friends and family come to me for help with computers and I have no problem, however this time I am a bit stumped. Any suggestions would be greatly appreciated. As you can see the error message is: Status: 0xc000000e Info: The boot selection failed because a required device is inaccessible. Before going to this error message it briefly flashes the Windows loading screen. I have been able to confirm through the Windows RE command line and the dir command that the C: drive is accessible and likely is just suffering a bootup issue. I have tried: Launching the repair process discussed in the error message three times, however each time it requires a restart and then returns to the same error message. Changing the boot order to be hard drive first. Getting into safe mode; F8 just results in the same error message before I can get to the menu to select safe mode. I have checked to make sure the BCD (bcdedit, Boot Configuration Data) is still intact as per https://www.symantec.com/business/support/index?page=content&id=TECH160475 I plan to try (but would like additional comments on): sfc /scannow; requires a restart and thus will likely result in the error message again. A memory scan. Bootrec as per http://support.microsoft.com/kb/927392#method1 Swapping IDE cables/ports. Resetting the BIOS. I noticed others with similar issues around the web are dual-booting, however this machine is not set up in a dual-boot environment. Additionally, at one point this error message supposedly showed up before I started working on the computer: The instruction at 0xfbe2584d referenced memory at 0x00000008. The memory could not be read. As previously stated, any additional suggestions or words of advice would be greatly appreciated.
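
    For reference, the bootrec sequence from the KB927392 article mentioned above, run from the Windows RE command prompt (a sketch of the usual order; /rebuildbcd scans all disks for installations and asks before adding them to the BCD store):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd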

    Read the article

  • Strange stream of HTTP GET requests in apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my apache logs, and I see a lot of very similar requests: GET / HTTP/1.1 User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \ NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2 Host: [my_domain].org Accept: */* There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user agent version numbers); they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's just annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but: I don't remember subscribing to such a service; why would it need to check my site twice every minute; why doesn't it use a clearly identifying FQDN? Or should I send this question to Amazon, via their abuse contact? Thanks!
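
    In the meantime, if you want those requests denied while you investigate, a sketch for Apache 2.2 (assumes mod_setenvif is loaded; matching the exact curl build string keeps collateral damage low, but any legitimate client with that same curl version would also be blocked):

        SetEnvIf User-Agent "^curl/7\.24\.0" ec2_probe
        <Location />
            Order Allow,Deny
            Allow from all
            Deny from env=ec2_probe
        </Location>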

    Read the article

  • How to set shmall, shmmax, shmmni, etc. ... in general and for PostgreSQL

    - by jpic
    I've used the PostgreSQL documentation to set it; for example, this config: >>> cat /proc/meminfo MemTotal: 16345480 kB MemFree: 1770128 kB Buffers: 382184 kB Cached: 10432632 kB SwapCached: 0 kB Active: 9228324 kB Inactive: 4621264 kB Active(anon): 7019996 kB Inactive(anon): 548528 kB Active(file): 2208328 kB Inactive(file): 4072736 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 0 kB SwapFree: 0 kB Dirty: 3432 kB Writeback: 0 kB AnonPages: 3034588 kB Mapped: 4243720 kB Shmem: 4533752 kB Slab: 481728 kB SReclaimable: 440712 kB SUnreclaim: 41016 kB KernelStack: 1776 kB PageTables: 39208 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 8172740 kB Committed_AS: 14935216 kB VmallocTotal: 34359738367 kB VmallocUsed: 399340 kB VmallocChunk: 34359334908 kB HardwareCorrupted: 0 kB AnonHugePages: 456704 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 12288 kB DirectMap2M: 16680960 kB >>> ipcs -l ------ Shared Memory Limits -------- max number of segments = 4096 max seg size (kbytes) = 4316816 max total shared memory (kbytes) = 4316816 min seg size (bytes) = 1 ------ Semaphore Limits -------- max number of arrays = 128 max semaphores per array = 250 max semaphores system wide = 32000 max ops per semop call = 32 semaphore max value = 32767 ------ Messages Limits -------- max queues system wide = 31918 max size of message (bytes) = 8192 default max size of queue (bytes) = 16384 sysctl.conf extract: kernel.shmall = 1079204 kernel.shmmax = 4420419584 postgresql.conf non-defaults: max_connections = 60 # (change requires restart) shared_buffers = 4GB # min 128kB work_mem = 4MB # min 64kB wal_sync_method = open_sync # the default is the first option checkpoint_segments = 16 # in logfile segments, min 1, 16MB each checkpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 effective_cache_size = 6GB Is this appropriate? If not (or not necessarily), in which cases would it be appropriate? We did notice nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
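
    A sizing sketch for the two kernel knobs, under the usual rule of thumb (assumptions: shmmax must at least cover shared_buffers = 4GB plus some slop for PostgreSQL's other shared structures, and shmall is counted in pages, not bytes; persist the values in /etc/sysctl.conf once they work):

        PAGE_SIZE=$(getconf PAGE_SIZE)             # usually 4096
        SHMMAX=$(( (4096 + 512) * 1024 * 1024 ))   # bytes: 4GB shared_buffers + ~512MB slop
        SHMALL=$(( SHMMAX / PAGE_SIZE ))           # pages, system-wide
        sysctl -w kernel.shmmax=$SHMMAX
        sysctl -w kernel.shmall=$SHMALL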

    Read the article

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard, analysis based on joins, unions and aggregates, and pivot/data pilot summaries. So far I've only found either spreadsheets OR db applications, but no good combination. OpenOffice Base with Calc for tables doesn't support aggregates, for example; Google Spreadsheet + Visualization API doesn't support unions or joins; Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used? Basically I'm trying to do some analysis of my personal bank transactions. Problem 1, ETL. The data has to be moved from my bank to a database. My current solution is to manually copy and paste the data into one spreadsheet per account from my internet bank. Pains: not very scriptable; lots of scrolling to reach the point to paste; have to apply sorting and formatting to the pasted data each time. Problem 2, analysis. I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved because I can't find a UNION equivalent in the spreadsheets I've tried.
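
    One way around the missing UNION, sketched with the sqlite3 command-line shell (the CSV file names and the type/amount columns are hypothetical stand-ins for the per-account exports; recent sqlite3 versions create each table from the CSV header row on .import):

        sqlite3 bank.db \
          -cmd '.mode csv' \
          -cmd '.import account_a.csv account_a' \
          -cmd '.import account_b.csv account_b' \
          'SELECT type, SUM(amount) AS total
             FROM (SELECT * FROM account_a
                   UNION ALL
                   SELECT * FROM account_b)
            GROUP BY type;'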

    Read the article

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making some great headway in offering our services over https with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers. One is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So, I'll need to purchase 2 certs for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to Include all certificates in the certification path if possible... that had me stumped for a while), and then finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com... because requests need to be forwarded to different web servers. Each of these firewall rules uses the same HTTP listener. I have just found out that you can only use 1 certificate per HTTP listener. To make matters worse, you can only have a single HTTP listener per IP/port. Is this correct? Can I only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc... Can someone please confirm whether I'm doing this correctly / incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.

    Read the article

  • Server configuration advice for new site that could get lots of traffic within 6m

    - by alchemical
    We're setting up a new web 2.0 type site with elements of e-commerce. Budget is kind of tight. Due to the nature of the site and promotions, etc., we expect traffic could ramp up fairly quickly. Looking for advice on a good configuration to start with; we're looking to co-lo with CalPop in downtown LA. We've looked at Dell, ABMX.com, and got a quote from CalPop (they make their own servers as they also do managed hosting). Price range has been anywhere from about $1200-$3300 per server. We're thinking to start with a web server and a db server, both with mirrored drives. It would be nice to stay under about $2k per server if possible. Minimum configuration for each would probably be a quad-core with 8GB RAM. Thinking to run Windows Server 2008 R2 (Web Edition?) and SQL Server 2008. Looking for advice on the best server configurations and/or brands that fit the budget, yet will allow us to smoothly scale as traffic increases. Reliability is also pretty important. Also wondering if a switch/router is necessary or useful to connect the two servers.

    Read the article

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with timezones within MySQL. Long story short, my application is worldwide, and each database has its own timezone set within the application (not the server) in the form of "Europe/Berlin", "Europe/Vienna", "America/Sao_Paulo". Obviously this is unacceptable for MySQL, at least per connection. I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger. Although UNIX_TIMESTAMP() follows the global timezone for the server, which obviously bothers me a lot :| So I went to search for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux's zoneinfo files. Although to my amusement, when I ran the command I got the following: bash: mysql_tzinfo_to_sql: command not found So I'm looking for a solution to fix that. I don't want to "map" the timezone names into UTC offsets just so I could use them in the trigger. Is there an alternative tool? Or at least sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue! P.S: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf
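
    If the binary is simply missing from the PATH, a sketch of the usual way to find and run it on Debian (the tool normally ships inside the MySQL server package, and its output is plain INSERTs into the mysql.time_zone* tables - loading them is what makes named zones usable per connection):

        # find which installed package provides the tool
        dpkg -S mysql_tzinfo_to_sql
        # canonical invocation: load the system zoneinfo into the mysql schema
        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql
        # afterwards, per-connection named zones work:
        #   mysql> SET time_zone = 'Europe/Berlin';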

    Read the article

  • How can I calculate the power consumption of my PC in watts?

    - by Jitendra vyas
    How can I calculate the power consumption of my PC in watts, to prove to my landlord (I live on rent) that my PC doesn't consume much power? He blames me for huge power bills even though he too uses a fridge, A.C., etc., and his son watches the TV all the time. We both share one power meter, so we split the bill 50%-50%, but he says I use the PC all the time, even keeping it on at night for downloading. I just want to calculate the power consumption of my PC; then I will calculate the monthly expense in units as per my city's per-unit price for power. I have - Windows: Microsoft Windows XP Professional 5.1.2600 Service Pack 3 Memory (RAM): 960 MB CPU Info: AMD Sempron(tm) Processor 2500+ CPU Speed: 1399.0 MHz Sound card: Vinyl AC'97 Audio (WAVE) Display Adapters: VIA/S3G UniChrome Pro IGP | NetMeeting driver | RDPDD Chained DD Monitors: 1 - 17inch LCD - LG Screen Resolution: 1280 X 768 - 32 bit Network: Network Present Network Adapters: Bluetooth Device (Personal Area Network) #2 | WAN (PPP/SLIP) Interface CD / DVD Drives: I: ELBY CLONEDRIVE COM Ports: COM1 | COM2 | COM7 | COM8 | COM9 | COM10 LPT Ports: LPT1 Mouse: 3 Button Wheel Mouse Present Hard Disks: C: 29.3GB | D: 29.3GB | E: 97.7GB | F: 97.7GB | G: 211.9GB USB Controllers: 5 host controllers. Firewire (1394): 1 host controllers. Manufacturer: Phoenix Technologies, LTD Product Make: MS-7142 AC Power Status: OnLine BIOS Info: AT/AT COMPATIBLE | 01/18/06 | VIAK8M - 42302e31 Motherboard: MICRO-STAR INTERNATIONAL CO., LTD MS-7142 Modem: ZTE USB Modem FFFE CDMA #2
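
    The accurate way is a plug-through power meter between the wall socket and the PC, but for a paper estimate the arithmetic is just watts x hours x days / 1000 = kWh. A sketch with assumed figures (150 W is a guess for a Sempron desktop plus a 17" LCD; the tariff is a placeholder - substitute the local per-unit price):

        WATTS=150; HOURS=12; DAYS=30; PRICE_PER_UNIT=5   # 1 unit = 1 kWh
        KWH=$(( WATTS * HOURS * DAYS / 1000 ))           # 150*12*30/1000 = 54 kWh/month
        echo "$KWH kWh/month, cost: $(( KWH * PRICE_PER_UNIT )) per month"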

    Read the article

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on GigeNET cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated. Specs: Debian Lenny 64bit ~12Ghz (6x2GHz) CPU, 7520gb RAM, 160gb disk. Kernel: 2.6.32-4-amd64 mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian)) Other software: vBulletin 3.8.4 memcached 1.2.2 PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27) lighttpd/1.4.28 (ssl) - a light and fast webserver PHP and vBulletin are configured to use memcached. MySQL settings: [mysqld] key_buffer = 128M max_allowed_packet = 16M thread_cache_size = 8 myisam-recover = BACKUP max_connections = 1024 query_cache_limit = 2M query_cache_size = 128M expire_logs_days = 10 max_binlog_size = 100M key_buffer_size = 128M join_buffer_size = 8M tmp_table_size = 16M max_heap_table_size = 16M table_cache = 96 Other: From the cloud's IO chart, we're averaging 100mb/s read. > vmstat procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 9 0 73140 36336 8968 1859160 0 0 42 15 3 2 6 1 89 5 > /etc/init.d/mysql status Threads: 49 Questions: 252139 Slow queries: 164 Opens: 53573 Flush tables: 1 Open tables: 337 Queries per second avg: 61.302. (moved from Super User)
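
    A tuning sketch rather than a definitive fix, based only on the numbers above (assumes the vBulletin tables are MyISAM and that RAM is actually free: the 128M key buffer is small for a 10GB database, and Opens: 53573 against Open tables: 337 hints the table cache is thrashing):

        [mysqld]
        key_buffer_size     = 1G    # cache far more MyISAM index blocks in RAM
        table_cache         = 512   # reduce constant table re-opening
        tmp_table_size      = 64M   # keep more implicit temp tables off disk
        max_heap_table_size = 64M   # must be raised together with tmp_table_size
        slow_query_log      = 1     # find the 164 slow queries before buying hardware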

    Read the article

  • Installing a SATA DVD burner on a machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard: Tyan Thunder K8WE S2895A2NRF Motherboard - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA: Integrated SATA II Generation 1 Controllers (from nForce Professional 2200) Two integrated dual-port SATA II controllers Four SATA connectors support up to four drives 3 Gb/s per direction per channel NvRAID v2.0 support Supports RAID 0, 1, 0+1 and JBOD. I just purchased a SATA DVD burner. Here is the page for the product: http://www.amazon.ca/gp/product/B002QGDWLK/ The problem I am facing is that I already have 4 SATA drives installed. I don't want to remove any of them. However, I want the DVD burner above installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options, if any? Even if the statements in the last paragraph are true, I suppose I could connect the DVD drive to one of the motherboard ports currently used by a hard drive, and connect that hard drive to the add-on card instead. Not all 4 hard drives need to be bootable. BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am fuzzy on the differences between connectors, cables and ports. Thanks in advance.

    Read the article

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (x86_64) [root@lov3 edac]# uname -a Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux a free clone of Red Hat 6.4, on AMD hardware [root@lov3 mcelog]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 4 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 21 Model: 2 Stepping: 0 CPU MHz: 1400.000 BogoMIPS: 4999.30 Virtualization: AMD-V L1d cache: 16K L1i cache: 64K L2 cache: 2048K L3 cache: 6144K NUMA node0 CPU(s): 0-7 NUMA node1 CPU(s): 8-15 NUMA node2 CPU(s): 16-23 NUMA node3 CPU(s): 24-31 NUMA node4 CPU(s): 32-39 NUMA node5 CPU(s): 40-47 NUMA node6 CPU(s): 48-55 NUMA node7 CPU(s): 56-63 My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog [root@lov3 mcelog]# mcelog --config-file mcelog.conf AMD Processor family 21: Please load edac_mce_amd module. However, the module is present [root@lov3 mcelog]# locate edac_mce_amd.ko /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko and loaded [root@lov3 edac]# lsmod | grep mce edac_mce_amd 14705 1 amd64_edac_mod Is there anything that I can do to get mcelog working? The only reference I found is this thread: http://lists.centos.org/pipermail/centos/2012-November/130226.html

    Read the article

  • Computer does not switch on after power outage

    - by cristian
    The other day my PC suddenly shut off due to a drop in power. Since that moment it has not turned on again - no sign of life, it seems completely dead. I did several tests: changed the power outlet, disconnected and reconnected the wires, and reseated the cards, but the result is that nothing changes. What can I do? What could have happened? Could there be hardware damage from the power drop? Note: the voltage drop was not due to a lightning strike, so I would rule out storm damage (a burnt card, etc.). Thanks a lot.

    Read the article

  • Limiting interface bandwidth with tc under Linux

    - by Matt
    I have a Linux router which has a 10GbE interface on the outside and bonded Gigabit Ethernet interfaces on the inside. We currently have budget for 2Gbit/s. If we exceed that rate by more than 5% average for a month, then we'll be charged for the whole 10Gbit/s capacity. Quite a step up in dollar terms. So, I want to limit this to 2Gbit/s on the 10GbE interface. The TBF filter might be ideal, but this comment is of concern: "On all platforms except for Alpha, it is able to shape up to 1mbit/s of normal traffic with ideal minimal burstiness, sending out data exactly at the configured rates." Should I be using TBF or some other filter to apply this rate to the interface, and how would I do it? I don't understand the example given here: Traffic Control HOWTO. In particular "Example 9. Creating a 256kbit/s TBF": tc qdisc add dev eth0 handle 1:0 root dsmark indices 1 default_index 0 tc qdisc add dev eth0 handle 2:0 parent 1:0 tbf burst 20480 limit 20480 mtu 1514 rate 32000bps How is the 256kbit/s rate calculated? In this example, 32000bps = 32k bytes per second, since tc uses bps = bytes per second. I guess burst and limit come into play, but how would you go about choosing sensible numbers to reach the desired rate? This is not a mistake; I tested this and it gave a rate close to 256k but not exactly that.
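
    On the burst arithmetic: the token bucket is refilled once per timer tick, so burst must be at least rate / HZ or the configured rate cannot be sustained. A sketch for the 2Gbit/s case (assuming HZ=250: 2Gbit/s is 250MB/s, so the minimum burst is 250MB / 250 ticks = 1MB; 2MB leaves headroom), where latency bounds how long a packet may wait for tokens instead of setting limit by hand:

        tc qdisc add dev eth0 root tbf rate 2gbit burst 2mb latency 50ms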

    Read the article

  • Mod_Perl configuration for multiple domains

    - by daliaessam
    Reading the mod_perl module documentation: can we configure it on a per-domain basis? What I mean is, can we configure it to run on every domain, or on a specific domain only? What I see in the docs is: Registry Scripts To enable registry scripts add to httpd.conf: Alias /perl/ /home/httpd/2.0/perl/ <Location /perl/> SetHandler perl-script PerlResponseHandler ModPerl::Registry PerlOptions +ParseHeaders Options +ExecCGI </Location> and now assuming that we have the following script: #!/usr/bin/perl print "Content-type: text/plain\n\n"; print "mod_perl 2.0 rocks!\n"; saved in /home/httpd/httpd-2.0/perl/rock.pl. Make the script executable and readable by everybody: % chmod a+rx /home/httpd/httpd-2.0/perl/rock.pl Of course the path to the script should be readable by the server too. In the real world you probably want tighter permissions, but for the purpose of testing that things are working, this is just fine. From what I understand above, we can run Perl scripts only from the one specific folder that the directive above points at. So the question again: can we apply this directive per domain - for all domains, or for a specific number of domains?
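
    For what it's worth, mod_perl directives can live inside <VirtualHost> blocks, so the registry setup can be repeated (or left out) per domain; a domain without such a block never hands requests to the registry handler. A sketch with placeholder domain and paths:

        <VirtualHost *:80>
            ServerName www.example-one.com
            Alias /perl/ /home/httpd/example-one/perl/
            <Location /perl/>
                SetHandler perl-script
                PerlResponseHandler ModPerl::Registry
                PerlOptions +ParseHeaders
                Options +ExecCGI
            </Location>
        </VirtualHost>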

    Read the article

  • Is it possible to rate limit based on host headers? i.e. not just on IP address

    - by Blankman
    I have a web service endpoint that I am building where people will POST an XML file, and it will really get pounded, with over 1K requests per second. Now, they are sending in these XML files via HTTP POST, but a good majority of them will be rate limited. The problem is, the rate limiting is done by the web application by looking up the source_id in the XML, and if that id is over x requests per minute, the request is not processed further. I was wondering if I could do the rate-limit check earlier in the processing somehow, and thus save the 50K file going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a request header so the XML file doesn't even have to be parsed and loaded into memory. Is it possible to just look at the headers and not load the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack, etc.
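
    This is exactly what header-keyed rate limiting in a reverse proxy does. A sketch in nginx (X-Source-Id is a header name invented here - clients would copy source_id into it; limit_req then rejects excess requests at the proxy before the 50K body is ever handed to the application):

        # http context: shared-memory zone keyed on the custom header, 100 req/min per id
        limit_req_zone $http_x_source_id zone=per_source:10m rate=100r/m;

        server {
            location /endpoint {                      # placeholder path
                limit_req zone=per_source burst=20 nodelay;
                proxy_pass http://127.0.0.1:8080;     # placeholder backend
            }
        }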

    Read the article
