Search Results

Search found 4258 results on 171 pages for 'maximum degree of paralle'.

Page 85/171 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • MSSQL Auditing Recommendations

    - by Josh Anderson
    As an aspiring DBA, I have recently been assigned the task of implementing tracking of all data changes in the database for a piece of software we are developing. After playing with Microsoft's change data capture methods, I'm looking into some other solutions. We are planning to distribute our product as a hosted solution, and unlimited installations would be desired for maximum scalability. I've looked at IBM's Guardium as well as DB Audit by SoftTree. I'm curious if anyone has solutions they may have used in the past, or any suggestions or methods to achieve complete, and of course cost-effective, auditing of data changes.
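    For reference, SQL Server's built-in Change Data Capture is enabled per database and then per table; a rough sketch via sqlcmd follows (the server, database, and table names are placeholders, not from the question). Change rows then land in system tables (e.g. cdc.dbo_Orders_CT) that a collector job can query or ship elsewhere:

        sqlcmd -S myserver -d MyAppDb -Q "EXEC sys.sp_cdc_enable_db"
        sqlcmd -S myserver -d MyAppDb -Q "EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'Orders', @role_name = NULL"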

    Read the article

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ Direct Attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run:
        netserver
    and on the other:
        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768
    I get:
        MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
         87380  16384   32768    10.02     1321.34
    The measured throughput is 1.3Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?
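    A few generic first steps for this kind of shortfall (eth2 is a placeholder for the 10GbE interface, and the sysctl values are illustrative starting points, not tuned recommendations):

        # Check that TSO/GSO/GRO offloads are enabled; they matter a lot at 10GbE
        ethtool -k eth2 | egrep 'segmentation|offload'
        # Raise socket buffer ceilings so a single TCP stream can fill the pipe
        sysctl -w net.core.rmem_max=16777216
        sysctl -w net.core.wmem_max=16777216
        sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
        sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'
        # Re-test with explicitly larger socket buffers on both ends
        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768 -s 2097152 -S 2097152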

    Read the article

  • need for tcp fine-tuning on heavily used proxy server

    - by Vijay Gharge
    Hi all, I am using Squid as an Internet proxy server on RHEL 4 update 6 & 8, under quite heavy load, i.e. 8k established connections during peak hours. Without depending much on the application provider's expertise, I want to achieve maximum output from Linux. With respect to that, I have the following questions:
    1. How do I find out whether there is scope for further TCP fine-tuning (without exhausting available resources), given that the benchmark values provided by the vendor look poor? Is there any parameter value available from the OS / network stack that will show me the results?
    2. If there is scope, how should I identify and configure the OS TCP stack parameters, i.e. using sysctl or any specific parameter?
    3. Post-tuning, how should I clearly measure performance enhancement / degradation?
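    As a starting point for question 1, the kernel will show you where sockets pile up before you change anything; the tuning values below are common candidates for a busy proxy, not vendor-blessed numbers:

        # Where are connections stuck? (ESTABLISHED vs TIME_WAIT vs SYN_RECV)
        netstat -ant | awk '{print $6}' | sort | uniq -c | sort -rn
        # Typical knobs for many short-lived client connections
        sysctl -w net.ipv4.ip_local_port_range='1024 65535'
        sysctl -w net.ipv4.tcp_fin_timeout=30
        sysctl -w net.ipv4.tcp_tw_reuse=1
        sysctl -w net.core.somaxconn=1024

    For question 3, benchmark through the proxy before and after each change, e.g. with ApacheBench: ab -n 10000 -c 200 -X proxyhost:3128 http://example.com/ (the proxy host and port are placeholders).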

    Read the article

  • Using LDAP Attributes to improve performance for large directories

    - by Vineet Bhatia
    We have an LDAP directory with more than 50,000 users in it. Our LDAP vendor suggests a maximum of 40,000 users per LDAP group. We have a number of inactive users that are being purged, but what if we don't get below 40,000 users? Would switching to a multivalued attribute at the user-record level, instead of using LDAP groups, yield better performance during authentication, adding new users, etc.? I know most server software (portals, application servers, etc.) uses LDAP groups. But we have a standardized web service interface for access control instead of relying on server software to map LDAP groups to security roles. Each application uses this common "access control web service". Security roles are used within each application to build the fine-grained ACL used within each enterprise application.
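    One way to ground the decision is to time both lookup shapes against the real directory; every DN and the appRole attribute below are hypothetical stand-ins for your schema:

        # Group shape: the server evaluates membership in a 40,000-entry group
        time ldapsearch -x -H ldap://ldap.example.com \
          -b 'cn=appusers,ou=groups,dc=example,dc=com' \
          '(member=uid=jdoe,ou=people,dc=example,dc=com)' dn
        # Attribute shape: one read of (ideally indexed) values on the user entry
        time ldapsearch -x -H ldap://ldap.example.com \
          -b 'uid=jdoe,ou=people,dc=example,dc=com' '(appRole=finance)' dn

    If the attribute is indexed, the second form usually scales with the user entry rather than with group size, which is the effect you are after.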

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like:
        ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23
    Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output:
        # dmesg | tail
        [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null)
        [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000]
        [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null)
        [4982666.631444] VFS: file-max limit 1231582 reached
        [4982666.764240] VFS: file-max limit 1231582 reached
        [4982767.360574] VFS: file-max limit 1231582 reached
        [4982901.904628] VFS: file-max limit 1231582 reached
        [4982964.930556] VFS: file-max limit 1231582 reached
        [4982966.352170] VFS: file-max limit 1231582 reached
        [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000]
    Obviously, the file-max errors look suspicious, being clustered together and recent.
        # cat /proc/sys/fs/file-max
        1231582
        # cat /proc/sys/fs/file-nr
        1231712 0 1231582
    That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network.
        # lsof | wc
        16046 148253 1882901
        # ps -ef | wc
        574 6104 44260
    I saw some documentation saying: "The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file-handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for 'VFS: file-max limit reached'."
    My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal for me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open.
    Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back.)
    Thanks in advance for any help.
    And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5,000-10,000 open files, but there were still over 1.2 million left. I shut down just about everything: all interactive shells except for the one ssh session I left open to finish closing down, httpd, even the nfs service. Basically everything in the process table that wasn't a kernel process, and there was still an appalling number of files apparently left open.
    After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again.
    And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
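    One diagnostic worth noting: file-nr counts kernel-allocated file handles, not just process file descriptors, so handles held inside the kernel (by NFS, for instance) will never show up in lsof. A quick per-process tally makes that gap visible (a sketch for Linux /proc; run as root to see every process):

        # Kernel-allocated handles vs per-process descriptor counts
        cat /proc/sys/fs/file-nr
        for p in /proc/[0-9]*; do
          echo "$(ls "$p/fd" 2>/dev/null | wc -l) ${p##*/} $(cat "$p/comm" 2>/dev/null)"
        done | sort -rn | head

    If the per-process totals stay small while file-nr keeps climbing, the leak is on the kernel side, which fits the NFS observation above.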

    Read the article

  • 1000 Hz Linux kernel necessary if I have tickless and high resolution timers?

    - by Bob
    I am trying to improve performance on my server. I have a few processes that need low jitter (less than 10 ms variance). I have a load average of 4 maximum on an i7-920 (4 physical cores, 8 with HT). There are about 10 processes ranging from 40% to 90% of a core in user mode. System usage is 3% total. Total CPU usage is 80% max. Will setting the kernel from 100 Hz to 1000 Hz improve the jitter if tickless and high resolution timers are already set? This page seems to indicate it still does something: https://lkml.org/lkml/2009/4/28/401 How about changing from voluntary (PREEMPT_VOLUNTARY) to preemptible (PREEMPT)?
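    Before changing anything, it's worth confirming what the running kernel was built with and measuring the jitter you actually have; a sketch (cyclictest comes from the rt-tests package):

        # Timer-related build options of the running kernel
        grep -E 'CONFIG_HZ=|CONFIG_NO_HZ|CONFIG_HIGH_RES_TIMERS|CONFIG_PREEMPT' \
          /boot/config-$(uname -r)
        # hres_active: 1 means high-resolution timers are actually live
        grep hres_active /proc/timer_list | head -1
        # Baseline latency: 10k loops of a 10ms periodic thread at RT priority 80
        cyclictest -t1 -p 80 -n -i 10000 -l 10000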

    Read the article

  • Notebook fan spinning at max after trying Linux

    - by Igor Kulman
    I have a ThinkPad T420 (4178-BSG) that I use with Windows. The CPU fan was always very quiet and I was completely satisfied with it. A few days ago I booted BackTrack Linux from a flash drive, and the fan started to spin at maximum and was very loud. The problem is that this state persists. When I start the ThinkPad and boot Windows as usual, the fan starts spinning at max and never stops. It drives me mad. It looks like Linux somehow changed some settings and now I have to suffer. I tried resetting the BIOS and updating the BIOS; nothing helps. I even removed the keyboard and looked at the fan, but there is no dust.
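    On these machines the fan is driven by the embedded controller (EC), not by a BIOS setting, so a full power drain (shut down, unplug AC, remove the battery, hold the power button for about 30 seconds) is often what actually resets it. Alternatively, if you can boot any Linux live image with the thinkpad_acpi module loaded with fan_control=1, you can hand control back to the EC explicitly (a sketch; not applicable from Windows):

        cat /proc/acpi/ibm/fan                 # current level and RPM
        echo level auto > /proc/acpi/ibm/fan   # return fan control to the EC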

    Read the article

  • Most efficient way to set up a game server

    - by alex bowers
    I'm running a PHP-based game which is predicted to have over 45 million members by the end of this year (2011). Currently we are at 7.5 million. The game runs on Facebook, and I desperately need help getting this game server as efficient and as powerful as possible. It is a dedicated server with the following specs:
    Processor: Intel i7 920, 4x 2x 2.66 GHz
    NIC: GigaEthernet
    RAM: 12 GB
    Hard disk: 4 x 1 TB
    It has Apache installed, cPanel, phpMyAdmin, several Apache mods and MySQL. The game also runs 47 MySQL calls per second per user. Are there any alternatives to the above which could be faster, more efficient, etc.? I don't mind having to recode the game to fit, as long as it maximises our upper limit of members on the game. Thanks. Also, is there a way to tell what our maximum limit on players, database calls, etc. is? Thank you again, hope you guys can help :)
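    Before changing hardware, it helps to measure where the current box actually saturates; a rough sketch using stock MySQL tooling (credentials and thresholds are placeholders):

        # Sample query rate and connection pressure every 10 seconds
        mysqladmin -u root -p -r -i 10 extended-status \
          | egrep 'Questions|Threads_connected|Slow_queries'
        # Log anything slower than 1s to find the worst of the 47 calls/sec/user
        mysql -u root -p -e "SET GLOBAL slow_query_log = 1; SET GLOBAL long_query_time = 1;"

    Your practical ceiling on players is usually whichever of these climbs fastest under load, rather than a fixed hardware number.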

    Read the article

  • How do I use long names to refer to Group Managed Service Accounts (gMSA)?

    - by Jason Stangroome
    Commonly domain user accounts are used as service accounts. With domain user accounts, the username can easily be as long as 64 characters as long as the User Principal Name (UPN) is used to refer to the account, eg [email protected]. If you still use the legacy pre-Windows 2000 names (SAM) you have to truncate it to ~20 characters, eg mydomain\truncname. When using the New-ADServiceAccount PowerShell cmdlet to create a new Group Managed Service Account (gMSA) and a name longer than 15 characters is specified, an error is returned. To specify a longer name, the SAM name must be specified separately, eg: New-ADServiceAccount -Name longname -SamAccountName truncname ... To configure a service to run as the new gMSA, I can use the legacy username format mydomain\truncname$ but using usernames with a maximum of 15 characters in 2013 is a smell. How do I refer to a gMSA using the UPN-style format instead? I tried the longname$@domainfqdn approach but that didn't work. It also seems that the gMSA object in AD doesn't have a userPrincipalName attribute value specified.

    Read the article

  • MacBook Pro shutting down instantly after attempt to update firmware

    - by Luke Dennis
    I tried to apply a firmware update on a MacBook Pro 2.4 GHz (2008), and after rebooting, the fans kicked up to maximum, the screen lit up, and then it immediately died. Now when I try to boot, it does the same thing: the fans crank to max, the screen lights up, and then it dies after about 2 seconds. Resetting the power manager does nothing. It doesn't stay up long enough to choose another boot drive or boot from CD. I have no idea what else to try. Help?

    Read the article

  • When I set my FSB to 400 MHz in BIOS it actually stays at 333 MHz

    - by Mantis
    My front side bus (FSB) is rated for a maximum of 400 MHz (rated FSB 1600 MHz). In fact, it used to run at 400 MHz until recently. I'm trying to overclock my E8400 to 3.6 GHz. I have done that in the past by setting the FSB to 400 with a multiplier of 9. Now, when I set the FSB to 400, the system boots as normal, but the FSB is stubbornly stuck at 333 (rated FSB 1333). The CMOS is set to 400; I've triple-checked it. It's just that the FSB isn't listening. Is my FSB damaged? Gigabyte mobo.

    Read the article

  • Does the advanced format tool bundled by manufacturers actually do anything which mkntfs doesn't?

    - by neurolysis
    I recently bought a new drive (specifically, a 2TB Samsung Spinpoint) that says on the label that it supports Advanced Format, and that I should download the tool from their site. Unless I'm missing something, mkntfs has always had its maximum sector size at 4096 bytes:
        -s, --sector-size BYTES
            Specify the size of sectors in bytes. Valid sector size values are 256, 512, 1024, 2048 and 4096 bytes per sector. If omitted, mkntfs attempts to determine the sector-size automatically and if that fails a default of 512 bytes per sector is used.
    Will this tool on Samsung's site do anything other than format the drive in the same way that mkntfs -s 4K /dev/sdb1 would? To be specific, I'm intending to use this drive on a machine that will primarily run Windows XP, but I'd rather boot into Linux/BSD and format the disk manually than have bloated software. I do want to have the new AF-style sectors though -- that's essential. So if I ran the command above, would it have exactly the same effect as using the Advanced Format tool?
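    Most vendor "AF tools" for XP exist to fix partition alignment rather than the filesystem: XP starts the first partition at LBA 63, which straddles 4K physical sectors, whereas Vista/7 and recent Linux partitioners start at sector 2048. You can check both facts yourself before deciding you need the tool (sdb is a placeholder for your drive):

        # AF drives report 512-byte logical but 4096-byte physical sectors
        cat /sys/block/sdb/queue/logical_block_size
        cat /sys/block/sdb/queue/physical_block_size
        # Recent parted can verify that partition 1 is 4K-aligned
        parted /dev/sdb align-check optimal 1

    If the partition is aligned, mkntfs's default 4 KB clusters already line up with the physical sectors, and the vendor tool should have little left to do.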

    Read the article

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista onto the computer. I forgot to copy the SyncBack logs before the clean install, and now whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of having their contents extracted to it. Also, SyncBack is not traversing the folders during the restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack. I have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts files from the RAR/ZIP files. I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)
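    If it comes to writing that program, a minimal sketch of the traversal is below; it assumes a Unix-like shell (a Linux live CD, or Cygwin on the Vista machine) with the backup drive mounted, and all paths are placeholders:

        # Walk the backup tree, unpacking each ZIP into its source-relative folder
        cd /mnt/backup
        find . -name '*.zip' -print | while IFS= read -r z; do
          dest="/mnt/restore/$(dirname "$z")"
          mkdir -p "$dest"
          unzip -o "$z" -d "$dest"
        done
        # For RAR archives, the same loop with: unrar x -o+ "$z" "$dest/"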

    Read the article

  • Should I reinstall my production server from 32 bit to 64 bit if it has 16GB of RAM?

    - by Alexandru Trandafir Catalin
    I have a production server with 16 GB of RAM that came with a 32-bit CentOS installation. The website hosted on this server is increasing its traffic every day, and I am having some MySQL trouble, so I checked the MySQL configuration with mysqltuner.pl, which gave me the following messages:
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        *** MySQL's maximum memory usage is dangerously high ***
        *** Add RAM before increasing MySQL buffer variables ***
    So my question is: can I survive with the 32-bit OS, or will I have to install the 64-bit OS? Thanks.
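    The core limitation is that even with a PAE kernel seeing all 16 GB, a single 32-bit mysqld process is capped at 4 GB of virtual address space (in practice roughly 2-3 GB usable), so InnoDB's buffer pool can never grow into your RAM. Two quick checks before reinstalling (Linux):

        getconf LONG_BIT                       # 32 = current userland is 32-bit
        grep -ow lm /proc/cpuinfo | head -1    # "lm" = CPU can run a 64-bit OS
        free -m                                # how much of the 16 GB the kernel sees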

    Read the article

  • Why can't Windows see mmcblk0p3? [closed]

    - by jacknad
    The partition is created on the embedded Linux target like this:
        # n - new
        # p - partition
        # 3 - partition 3
        # 66 - starting cylinder
        # <blank> - maximum size for the ending cylinder
        # t - set file system type
        # 3 - partition 3
        # c - set to windows vfat
        # w - write partition table and exit
        echo -e "n\np\n3\n66\n\nt\n3\nc\nw" | fdisk /dev/mmcblk0
    The file system is then formatted on the embedded Linux target as MS-DOS like this:
        # -n volume-name
        # -F FAT-size
        mkfs.vfat -n DB -F 32 /dev/mmcblk0p3
    A Linux host can mount and access files in mmcblk0p3 without issue. Why can't Windows?
    Edit: Although the default number of FATs is 2, I tried adding -f 2 [number-of-FATs] since this is actually being done by BusyBox on an embedded platform, but this didn't help. I understand the Linux MS-DOS file system does not support more than 2 FATs, but there are only 2 on this target: the boot partition (which is also FAT, and is visible), along with an EXT3 (on p2) for the root file system.
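    Two read-only checks on a Linux host show the card exactly as Windows will read it. Note also that older Windows versions typically assign a drive letter only to the first primary partition of media flagged as removable, which hides later partitions on many SD/MMC readers:

        fdisk -l /dev/mmcblk0     # partition table and type bytes Windows reads
        blkid /dev/mmcblk0p3      # filesystem type/label as actually formatted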

    Read the article

  • Cheapest server to run Windows 2008 R2?

    - by chopps
    Hey everyone, I want to build a really cheap server to use for testing etc. but don't want to spend a lot of dough. Any recommendations on what kind of home PC/server would work for these requirements? Any place to get refurbs at a good price?
    Processor - Minimum: 1.4 GHz (x64 processor). Note: An Intel Itanium 2 processor is required for Windows Server 2008 for Itanium-Based Systems
    Memory - Minimum: 512 MB RAM. Maximum: 8 GB (Foundation), 32 GB (Standard), or 2 TB (Enterprise, Datacenter, and Itanium-Based Systems)
    Disk Space - Minimum: 32 GB or greater; Foundation: 10 GB or greater. Note: Computers with more than 16 GB of RAM will require more disk space for paging, hibernation, and dump files
    Display - Super VGA (800 × 600) or higher resolution monitor
    Other - DVD drive, keyboard and Microsoft mouse (or compatible pointing device), Internet access (fees may apply)

    Read the article

  • SPF include: too many IP addresses

    - by sprezzatura
    I've hit a snag with SPF. The SPF record for my domain will contain four or five entries, plus it will contain: include:sgizmo.com. The SPF record for sgizmo.com contains eleven entries! This, plus mine, is way over the maximum of ten allowed by the RFC (and probably by most servers). I realize that there has to be a limit in order to prevent DoS attacks. However, in the real world, it is probably not unreasonable for large companies to have many server addresses. Furthermore, must I now monitor my 'include:' counterparts for changes and additions? Must I check weekly, or daily, to ensure that some combination of changes doesn't suddenly put me over the top? It doesn't seem to me that SPF is suitable for prime time. Is there another way to do this?
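    Note that RFC 4408's limit of ten applies to DNS-querying mechanisms (include, a, mx, ptr, exists), not to IP addresses: plain ip4:/ip6: entries cost nothing. The usual workaround is therefore to "flatten" a heavyweight include into its networks, at the price of watching the source for changes:

        # What does sgizmo.com's record expand to today?
        dig +short txt sgizmo.com
        # A flattened record lists networks directly and uses zero extra lookups,
        # e.g. (illustrative documentation-range addresses):
        # "v=spf1 mx ip4:198.51.100.0/24 ip4:203.0.113.0/24 ~all"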

    Read the article

  • In what way does non-"full n-key rollover" hinder fast typists?

    - by Michael Kjörling
    Wikipedia claims (although the latter claim does not cite a source) that: High-end keyboards that provide full n-key rollover typically do so via a PS/2 interface as the USB mode most often used by operating systems has a maximum of only six keys plus modifiers that can be pressed at the same time.[4] This hinders fast typists, ... In what way would the system being able to recognize only six non-modifier keys at once hinder a fast typist? I consider myself a relatively fast typist and I usually press one key, plus modifiers, at once; I can't imagine any real-life situation in which the system only recognizing six non-modifier keys being pressed at once has been a limiting factor in my keyboard usage. (Multi-stroke keyboard shortcuts as used by high-end software like Visual Studio, Emacs and the like are a different matter.) Note that I am not really interested in answers centered around multiplayer computer games; I'm looking for answers that give reasons that would be relevant to typists, somehow supporting the statement made on Wikipedia.

    Read the article

  • Is it Possible to Increase Display Resolution for OS X Mavericks

    - by Michael
    The new OS X Mavericks operating system has reduced the maximum display resolution from 1920 x 1200 in Mountain Lion to 1680 x 1050, which is a SIZABLE reduction. The difference is obvious when viewing videos or photos. In addition, the colors are less vibrant. Does anyone know a way to change the display resolution for Mavericks, thus restoring the Mountain Lion resolution (1920 x 1200), along with color vibrancy? By the way, I am using a 2012 MacBook Pro with matte display, which I think makes matters worse. At 1920 x 1200 my MacBook Pro was excellent, but at 1680 x 1050 it is very pedestrian.

    Read the article

  • What PSU is usually used in mini-ITX cases/chassis?

    - by Subaru Tashiro
    The mini-ITX computer will be a general use computer. Not a dedicated HTPC or Home server. In general use mini-ITX cases, what PSU form factor is usually used? I understand that some case manufacturers provide custom built PSU to fit their case but I prefer to get the ones that use a PSU that follows standard form factors in case a replacement is needed. For example, what PSU fits into general purpose cases by Lian Li? Am I to assume that smaller PSU form factors also affect the possible maximum output?

    Read the article

  • Installed over 4G RAM on 32-bit OS? [closed]

    - by kai
    Possible Duplicate: 32-bit Windows Server address > 4GB RAM - How? I know that for a 32-bit OS, the addressable memory space for each process is "4 GB" (maybe just 3 GB in user space...). If I have 8 GB of RAM, is it correct that all of the processes can still collectively utilize (share) these 8 GB of memory, with each of them limited to a maximum of 4 GB? Or can the whole system only see and utilize 4 GB out of the 8 GB, so that having 8 GB of RAM on a 32-bit OS is the same as having 4 GB?
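    The short version: with a PAE-enabled 32-bit kernel, the OS as a whole can use all 8 GB, handing out different physical pages to different processes, but each individual process is still limited to a 4 GB virtual address space (commonly split 3 GB user / 1 GB kernel on Linux, and 2 GB user by default on Windows). On Linux you can confirm both halves (a sketch):

        grep -ow pae /proc/cpuinfo | head -1   # CPU supports PAE
        free -m                                # RAM the kernel actually sees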

    Read the article

  • AMD FX 8350 potentially overheating

    - by rhughes
    I am worried that my CPU is overheating when running at maximum capacity. I have not overclocked the machine. The machine often powers down after a couple of minutes of max CPU usage. I get the following events in the Event Log after the crash:
        Log Name: System
        Source: Microsoft-Windows-Kernel-Power
        Date: 11/10/2013 12:05:40
        Event ID: 41
        Task Category: (63)
        Level: Critical
        Keywords: (2)
        User: SYSTEM
        Computer: home-pc
        Description: The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.
    What can I do to confirm this or further narrow down this issue? I believe that, due to the sudden nature of the crash, no MEMORY.DMP is created. I am happy to post any extra information that is needed.

    Read the article

  • Memory Leak in Windows Page File when calling a shell command

    - by Arno
    I have an issue on our Windows 2003 x64 build server when invoking shell commands from a script. Each call causes a "memory leak" in the page file, so it grows quite rapidly until it reaches the maximum and the machine stops working. I can reproduce the problem very nicely by running a Perl script like:
        for ($count=1; $count<5000; $count++) {
            system "echo huhu";
        }
    It is independent of the scripting language, as the same happens with Lua:
        for i=1,5000 do os.execute("echo huhu") end
    I found somebody describing the same issue with PHP at http://www.issociate.de/board/post/454835/Memory_leak_occurs_when_exec%28%29_function_is_used_on_Windows_platform.html His solution, firewall/virus scanner, does not apply; neither is running on the machine. We can also reproduce the issue on other developer machines running XP 64-bit, but not on XP 32-bit. The culprit for the allocation is C:\WINDOWS\System32\svchost.exe -k netsvcs, which runs all the basic Windows services. Does anybody know the issue and how to resolve it?

    Read the article

  • How do MaxSpareServers work in Apache?

    - by John Hunt
    I've scoured the web, but I can't find out what MaxSpareServers does in Apache MPM prefork. The documentation says: "The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes." Great, but what causes a spare server to be created? More importantly, when does a spare server go away? I understand that MinSpareServers are created gradually after the server is started. How does MaxSpareServers relate to MaxClients? Basically, I'm at a bit of a loss on how best to configure Apache. There's a lot of documentation out there, but it isn't that clear. Thanks, John.
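    To the first two questions: prefork doesn't create "spares" as such. It forks children to serve load (and to keep at least MinSpareServers idle), each child lives on after finishing its requests, and roughly once a second the parent reaps idle children down to MaxSpareServers. MaxClients is the independent hard ceiling on simultaneous children. A typical prefork block, with illustrative values rather than recommendations:

        <IfModule mpm_prefork_module>
            # children forked at startup
            StartServers          5
            # fork more when the idle pool drops below this
            MinSpareServers       5
            # reap idle children (about one per second) above this
            MaxSpareServers      10
            # hard cap on simultaneous children = concurrent requests
            MaxClients          150
            # recycle children periodically to contain memory leaks
            MaxRequestsPerChild 4000
        </IfModule>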

    Read the article

  • Which JMX statistics to watch out for in Catalina/Tomcat?

    - by geoaxis
    I have configured OpenNMS to collect all kinds of numeric data coming out of Tomcat 7's JMX interface. There are a lot of metrics. I am interested in monitoring this Tomcat instance so that I can avoid downtime and lockups. Which metrics should I be watching? I am already monitoring things like CPU, memory, and network via SNMP. With this JMX connection, the things I find interesting so far are Catalina:type=GlobalRequestProcessor,name="ajp-bio-/a.b.c.d-XXXX" (RequestsCount) and Catalina:type=Manager,context=/myApp,host=localhost (active sessions and their maximum).
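    Beyond request counts, the usual leading indicators for lockups are thread-pool saturation (currentThreadsBusy approaching maxThreads on the connector's ThreadPool MBean), heap usage after GC (java.lang:type=Memory, HeapMemoryUsage), and connection-pool exhaustion on your DataSource. A hedged sketch using the open-source jmxterm CLI (the jar name, port, and bean name are placeholders; check the syntax against jmxterm's own docs):

        echo 'get -b Catalina:type=ThreadPool,name="ajp-bio-/a.b.c.d-XXXX" currentThreadsBusy maxThreads' \
          | java -jar jmxterm-uber.jar -l localhost:9005 -n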

    Read the article
