Search Results

Search found 2200 results on 88 pages for 'human factors'.

  • Effects of HTTP/TCP connection handshakes on server performance

    - by Blankman
    When running Apache Bench on the same server as the website, like: ab -n 1000 -c 10 localhost:8080/ I am most probably not getting accurate results compared to users hitting the server from various locations. I'm trying to understand how, or rather why, this will affect real-world performance, since a user in China will have different latency issues compared to someone in the same state/country. Say my web server has a maximum thread limit of 100. Can someone explain in detail how end-user latency can/will affect server performance? I'm assuming here that each request will be computed equally at, say, 10ms. What I don't understand is how external factors can affect overall server performance, specifically internet connections (location, or even device, like mobile) and HTTP/TCP handshakes, etc.
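
    One way to see how client latency interacts with a bounded worker pool is to emulate it: a slow client holds one of the 100 threads for the full round trip, not just the ~10ms of compute, so concurrency rather than CPU becomes the limit. A minimal sketch using Linux netem on a test client (the interface name and hostname are assumptions, not from the question):

        # Add ~200ms of emulated WAN delay (with 40ms jitter) on the test client.
        sudo tc qdisc add dev eth0 root netem delay 200ms 40ms
        # Benchmark through the delay and compare against the localhost numbers.
        ab -n 1000 -c 10 http://server.example.com:8080/
        # Remove the emulation afterwards.
        sudo tc qdisc del dev eth0 root netem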

  • IIS7 failover cluster across datacenters

    - by Scott
    Hello, I have servers in two different datacenters, with each datacenter getting static IPs. What I would like to do is set up the servers as IIS7 servers and allow them to fail over from datacenter to datacenter with little (or preferably no) interruption. Servers on both sides are running Windows Server 2008 x64 with IIS7 (or 7.5 if needed). I am interested in how to point DNS traffic to the new datacenter without manual human intervention. For example:
    Datacenter A:
    - IP: 192.168.1.115
    - Servers: Server 2008 x64 w/ IIS 7
    Datacenter B:
    - IP: 192.168.1.220
    - Servers: Server 2008 x64 w/ IIS 7
    Other information:
    - Domain Name: Example.org
    - Domain DNS: 192.168.1.115
    If Datacenter A connectivity went down (broken service line, etc.), how does the traffic know to route to Datacenter B on 192.168.1.220? Thanks, Scott
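
    A hedged sketch of one low-tech answer (not from the question): run a health check from a third location and rewrite a low-TTL DNS record via dynamic update when datacenter A stops responding. This assumes a DNS server that accepts nsupdate with a TSIG key; the key, hostnames, and record below are hypothetical:

        #!/bin/bash
        # Repoint www.example.org at datacenter B if datacenter A is unreachable.
        PRIMARY=192.168.1.115
        BACKUP=192.168.1.220
        if ! curl -fsS --max-time 5 "http://$PRIMARY/" > /dev/null; then
            printf '%s\n' \
                "server ns1.example.org" \
                "zone example.org" \
                "update delete www.example.org A" \
                "update add www.example.org 60 A $BACKUP" \
                "send" | nsupdate -y "hmac-md5:failover:BASE64SECRET=="
        fi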

  • Dell R510 vs R710

    - by AX1
    Hello, the Dell R510 and R710 can both hold regular configurations (e.g. X5650, 24 GB RAM, etc.), and these usually come out to about the same price. Is there a particular reason why one would choose the R510 over the R710, or vice versa? There really appears to be a lack of differentiating factors. The only 'major' factor I found, which doesn't apply to me though, is that the R510 can hold up to 12 3.5in HDDs while the R710 (which is slightly more expensive) can only hold up to 6 3.5in HDDs. Maybe you guys have some input, or have bought either of these machines (or both), and can shed some light on other differences and why someone should choose one over the other, as the pricing is pretty much the same with my configuration. Thanks!

  • Using tshark to generate traffic logs every X seconds

    - by Sridhar Iyer
    I'm trying to use tshark to maintain a running history of all the packets that are going through an interface for, say, the last 30 seconds. I want it to be human-readable. This is a Linux machine, and without mucking too much in the netstack source (which I can do if push comes to shove), I was wondering if I can use tshark to do this. tshark has -b duration:10 -b files:2 options which I can use to generate a rotating set of 2 files, but I don't know what format it prints the files in or how to read them.
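
    For reference, -w with a ring buffer writes binary pcap files (tshark appends a sequence number and timestamp to the base name), and a second tshark invocation renders them as human-readable text. A sketch; the interface name and the exact rotated filename are assumptions:

        # Keep a rotating ~30-second window of traffic across two pcap files.
        tshark -i eth0 -b duration:30 -b files:2 -w /var/tmp/ring.pcap
        # Render one rotated file (the name carries a suffix like _00001_<timestamp>):
        tshark -r /var/tmp/ring_00001_20240101120000.pcap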

  • Command line tool for listing ID3 tags under Linux

    - by petersohn
    I want to write a script that manipulates the ID3 tags of mp3 files. I need a tool that reads the tags and outputs them in a machine-readable format. For example, if I want it to output only the title, then it outputs the title, nothing else. I tried different tools like id3 or eyeD3, but they can only be used to write tags or to output them in a human-readable format. Of course I could just filter that output through sed, but it seems unnecessarily complicated to me.
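
    One tool not mentioned in the question that can do this directly is ffprobe from FFmpeg: it can print a single tag with no labels or decoration, which makes it trivial to script. A sketch, assuming ffmpeg/ffprobe is installed:

        # Print only the title tag of an mp3 - no labels, no other output.
        ffprobe -v quiet -show_entries format_tags=title \
                -of default=noprint_wrappers=1:nokey=1 song.mp3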

  • Sustaining Dual Channel among many RAM modules

    - by Odys
    I'd like to know what factors need to be in place in order to sustain Dual Channel mode. On a mobo with 4 DDR3 slots:
    - Do I have to install chips in pairs? E.g. if I put in 3 identical chips only, will I have Dual Channel or not?
    - If I put in 4 RAM chips that aren't from the same vendor/model, will I have the same latency across them (the highest of all)? Also, will I sustain Dual Channel mode?
    - If one RAM chip has a max frequency of 1033 and the other 3 chips are 1300, will I have 1033MHz for all chips and Dual Channel mode on?
    - What if I put in 2x4GB and 2x8GB chips (latency, Dual Channel)? Can I put the 4GB chips in slots 1 and 3 and the 8GB chips in slots 2 and 4 and still have Dual Channel mode enabled?
    (Some of the questions might sound silly, but their answers aren't that clear to me. Also, assume there aren't any bottlenecks from other parts of the system.)
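
    As a practical check after mixing modules, the DMI tables show what the board actually negotiated per slot; if the Speed lines differ from the modules' ratings, everything has typically dropped to the slowest module. A sketch, assuming a Linux box with dmidecode installed:

        # Show per-slot module size, negotiated speed, manufacturer, and slot name.
        sudo dmidecode --type memory | grep -E 'Locator|Size|Speed|Manufacturer'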

  • Print full path of files and sizes with find in Linux

    - by cat pants
    Here are the specs: find all files in / modified after the modification time of /tmp/test, exclude /proc and /sys from the search, and print the full path of each file along with a human-readable size. Here is what I have so far: find / \( -path /proc -o -path /sys \) -prune -o -newer /tmp/test -exec ls -lh {} \; | less The issue is that the full path doesn't get printed. Unfortunately, ls doesn't support printing the full path! And all the solutions I have found that show how to print the full path suggest using find. :| Any ideas? Thanks!
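
    One hedged way around it: hand the paths to du instead of ls, since du prints a human-readable size followed by the path exactly as find supplied it. GNU find and du assumed; -type f is an addition here to keep du from printing cumulative directory sizes:

        # Full path plus human-readable size, skipping /proc and /sys.
        find / \( -path /proc -o -path /sys \) -prune -o \
            -type f -newer /tmp/test -exec du -h {} + 2>/dev/null | less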

  • iptables: How to read this OPT string?

    - by alex
    I have a simple INPUT rule for iptables that logs any new connections to a logfile. The --log-tcp-options and --log-ip-options flags are both set, and I get the appropriate OPT output. One line of my log looks something like this: Nov 29 17:00:00 IN=venet0 OUT= MAC= SRC=x.x.x.x DST=x.x.x.x LEN=64 TOS=0x00 PREC=0x00 TTL=53 ID=37898 DF PROTO=TCP SPT=57755 DPT=8888 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (0204057D010303010101080A3E521D4D0000000004020000) I would like to understand how to interpret the OPT string above. Is there some documentation available on what it actually means? How could I make it human-readable?
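
    For what it's worth, with --log-tcp-options set the OPT field is the raw TCP options block in hex: a sequence of (kind, length, value) entries, except that kind 0 (end of list) and kind 1 (NOP) have no length byte. A minimal bash decoder sketch over the logged string (kind names per RFC 793/1323/2018):

        #!/bin/bash
        # Decode a TCP options hex string from iptables --log-tcp-options.
        # Kinds: 0=EOL 1=NOP 2=MSS 3=window scale 4=SACK permitted 8=timestamps
        opt="0204057D010303010101080A3E521D4D0000000004020000"
        i=0
        while [ $i -lt ${#opt} ]; do
            kind=$((16#${opt:$i:2}))
            case $kind in
                0) echo "EOL"; break ;;
                1) echo "NOP"; i=$((i+2)) ;;
                *) len=$((16#${opt:$((i+2)):2}))
                   data=${opt:$((i+4)):$(( (len-2)*2 ))}
                   echo "kind=$kind len=$len data=0x$data"
                   i=$((i+len*2)) ;;
            esac
        done
        # Here kind=2 data=0x057D decodes to an MSS of 1405 bytes.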

  • Which is better for running Ubuntu and other Linux OSes, a Chromebook or a Windows laptop, and why? [on hold]

    - by Serge
    I'm learning programming and I would like to switch to a Linux OS, perhaps Ubuntu, to continue. My current machine is getting pretty old and slow, Windows is my least favorite option for development, and I can manage to get something new right around the price range of the nicest Chromebook on the market right now. However, I have compared the specs of the HP Chromebook 14 with those of regular PC laptops that cost roughly the same, and the latter consistently have approximately the same, and sometimes higher, specs (like processor speed). Yet using Chromebooks for this purpose is pretty widespread nowadays. Is this because they were initially built for a Linux OS - and is it really THAT crucial - or are there other major factors at play here?

  • China and Gmail attacks

    - by doug
    "We have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.” [source] I don't know much about how internet works, but as long the chines gov has access to the chines internet providers servers, why do they need to hack gmail accounts? I assume that i don't understand how submitting/writing a message(from user to gmail servers) works, in order to be sent later to the other email address. Who can tell me how submitting a message to a web form works?

  • Looking for Light Time Management Software Suggestions (for Mac)

    - by tmo256
    I'm looking for a simple project management app that performs task scheduling, along the lines of Merlin or MS Project, but nowhere near as robust. I don't need to deal with other (human) resources, but I work on anything from 3 to 6 different projects at a time. What I'd like is to be able to input deadlines and tasks, and have a schedule suggested for completing them. I do technical work, but I don't think I need anything specifically for software development, especially considering I do plenty of other kinds of things, like graphic design and social media PR. I'd really like this to be dead simple, as simple as possible. Suggestions? OmniPlan, something web-based? Definitely cannot afford anything too extravagant; really looking for something under $200. Thanks for your input!

  • Export SharePoint Wiki to PDF from the Command Line

    - by Wyatt Barnett
    We use a SharePoint wiki* at the office to serve as a knowledge base for our IT operations. Recently we went through a disaster recovery exercise where we realized we had a key hole in our plans: how do you restore the services if your instruction manual is down because some services are offline? Anyhow, we realized that the wiki angle was definitely something we wanted to keep, but that we should also explore a way to create offline backups of the wiki which could be easily read using common software, without needing the wiki itself. So, does anyone know of a good utility that can take a SharePoint wiki and dump it to PDF/Word/RTF/[INSERT HUMAN FRIENDLY FORMAT] easily from the command line? *-Yes, there are better solutions out there. But this was easy, used existing infrastructure, and generally does what we need it to do.
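
    Nothing SharePoint-specific comes to mind, but a hedged generic approach is to mirror the wiki pages with wget and batch-convert them with wkhtmltopdf. The URL, credentials, and .aspx extension below are assumptions, and SharePoint's NTLM authentication may require extra options or a different fetcher:

        # Mirror the wiki pages (hypothetical URL and service account).
        wget --mirror --page-requisites --convert-links \
             --user backupsvc --password 'secret' \
             http://sharepoint.example.com/wiki/
        # Convert each mirrored page to a PDF alongside it.
        for page in sharepoint.example.com/wiki/*.aspx; do
            wkhtmltopdf "$page" "${page%.aspx}.pdf"
        done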

  • Does removing admin rights really mitigate 90% of Critical Windows 7 vulnerabilities found to date?

    - by Jordan Weinstein
    Beyondtrust.com published a report, somewhat recently, claiming among other quite compelling things that "90% of Critical Microsoft Windows 7 Vulnerabilities are Mitigated by Eliminating Admin Rights". Other interesting 'facts' they provide say that these are also mitigated by NOT running as a local admin:
    - 100% of Microsoft Office vulnerabilities reported in 2009
    - 94% of Internet Explorer and 100% of IE 8 vulnerabilities reported in 2009
    BUT, reading the first page or so of the report, I saw this line: "A vulnerability is considered mitigated by removing administrator rights if the following sentence is located in the Security Bulletin's Mitigating Factors section: 'Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.'" "Could be" sounds pretty weak to me, so I wondered how valid all this really is. I'm NOT trying to say it's not safer to run without admin rights; I think that is well known. I just wonder if these stats are something you would use as ammo in an argument, or use to sell a change like that (removing users as local admins) to the business side? Thoughts? Link to the report (pdf) [should this be a community wiki?]

  • MAC and IP address text-identicon as avatar

    - by rubo77
    I would like to create something like identicons, but not with images - with a unique word for each IP address and MAC address: an easy-to-remember alias for a MAC address that is unique and reverse-lookupable. For example: IP 123.456.789.132 will result in an alias for that IP that is tied to an existing word from a wordlist and is unique. Background of this idea: this way we could easily identify our routers in our Opennet in Hamburg in a graphical NodeGraph. Is there some site already where I can convert MAC addresses to unique human-readable words?
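
    No such site comes to mind, but the mapping itself is a few lines of shell: hash the address and use the hash to index a fixed wordlist. A sketch assuming /usr/share/dict/words exists; it is deterministic, but reverse lookup requires keeping the same wordlist (and distinct addresses can collide on the same word):

        #!/bin/bash
        # Map an IP or MAC address deterministically to a dictionary word.
        word_for_addr() {
            local words=/usr/share/dict/words
            local count hash
            count=$(wc -l < "$words")
            # First 8 hex chars of the md5 give a stable 32-bit value.
            hash=$(printf '%s' "$1" | md5sum | cut -c1-8)
            sed -n "$(( (16#$hash % count) + 1 ))p" "$words"
        }
        word_for_addr "00:1a:2b:3c:4d:5e"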

  • Amazon EC2 migration from one region to the other

    - by Gnanam
    I'm using the following Amazon EC2 resources in the US East (Virginia) region:
    - 1 Running Instance
    - 1 Elastic IP
    - 2 EBS Volumes
    - 100 EBS Snapshots
    - 1 Key Pair
    - 2 Security Groups
    - 5 of my own AMIs (customized based on my application stack)
    My instance is based on a Linux distribution (CentOS) and my AMIs are S3-backed. Both EBS volumes are mounted on this running instance. We're planning to migrate our deployment to the US West region. Because Amazon EC2 resources are not shared across regions, my questions are:
    - What are all the factors that I need to consider in advance?
    - What are the recommended ways of migrating each EC2 resource from one region to the other?
    - Are there any hidden risks involved during and/or after the migration?
    Experts' ideas/suggestions/recommendations on this are highly appreciated.
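
    For reference (tooling newer than the question): the AWS CLI can copy EBS snapshots and EBS-backed AMIs between regions directly, while S3-backed (instance-store) AMIs still have to be rebundled and re-registered in the target region. A sketch with placeholder IDs:

        # Copy an EBS snapshot from us-east-1 into us-west-1.
        aws ec2 copy-snapshot --source-region us-east-1 \
            --source-snapshot-id snap-0123456789abcdef0 --region us-west-1
        # Copy an EBS-backed AMI the same way.
        aws ec2 copy-image --source-region us-east-1 \
            --source-image-id ami-0123456789abcdef0 \
            --name my-app-ami --region us-west-1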

  • Analyzing Linux NFS server performance

    - by Kamil Kisiel
    I'd like to do some analysis of our NFS server to help track down potential bottlenecks in our applications. The server is running SUSE Enterprise Linux 10. The kinds of things I'm looking to know are:
    - Which files are being accessed by which clients
    - Read/write throughput on a per-client basis
    - Overhead imposed by other RPC calls
    - Time spent waiting on other NFS requests, or disk I/O, to service a client
    I already know about the statistics available in /proc/net/rpc/nfsd, and in fact I wrote a blog post describing them in depth. What I'm looking for is a way to dig deeper and help understand what factors are contributing to the performance seen by a particular client. I want to analyze the role the NFS server plays in the performance of an application on our cluster so that I can think of ways to best optimize it.
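
    One way to get the per-client visibility that /proc/net/rpc/nfsd cannot provide is to capture the NFS traffic itself and let tshark aggregate it per conversation. A sketch (interface name is an assumption):

        # Summarize 60 seconds of NFS traffic (port 2049) per client conversation.
        tshark -i eth0 -f "port 2049" -a duration:60 -q -z conv,ip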

  • Overcrowded Windows XP Folders

    - by BlairHippo
    I know that, technically, an individual Windows XP directory can hold an immense number of files (over 4.29 billion, according to a quick Google search). However, is there a practical ceiling where too many files in one directory start having an impact on reads of those files? If so, what factors would exacerbate or mitigate the issue? I ask because my employer has several hundred XP machines in the field at client sites, and the performance of some of the older ones is getting "sludgy." The machines download and display client-defined images, and my supervisor and I suspect that our slacktastic approach to cache management could be to blame. (Some of the directories have tens of thousands of images in them.) I'm trying to gather evidence to support or contest the theory before spending time on a coding fix.

  • What kind of server hardware is roughly necessary to serve a website to 10k users?

    - by jcmoney
    I've been looking at VPSs, and the specs they offer for entry-level setups seem somewhat surprising to me. I'm new to this topic, but many VPSs offer less than 512MB of memory, and my laptop has 4GB of memory, so I am curious: what does it actually take in terms of hardware to serve, say, 10k users (say 5k daily active users)? I figure a large number of factors can sway this a lot, but just for benchmarking, say the site is a social networking site written in PHP using MySQL + Apache that's not doing anything unusual like serving lots of media. So, essentially a very basic Facebook minus the absurd number of photos and videos. What about 100k users (50k daily active)? 1 million (500k daily active)? Thanks in advance.
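
    A rough way to turn those user counts into load (the per-user rates are assumptions, not from the question): 5k daily actives at ~20 page views each is ~100k requests/day, which averages to just over 1 request/second; even a 10x peak factor is only ~12 req/s, within reach of a small VPS if pages are cheap to render. Benchmarking the actual stack is what settles it:

        # Measure sustained requests/second at 50 concurrent clients (hypothetical URL).
        ab -n 5000 -c 50 http://your-site.example.com/index.php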

  • Puppet performance compared to cfengine

    - by Andy
    I'm considering using Puppet or cfengine. The key factor is performance, and research on the internet suggests cfengine uses less memory and fewer CPU cycles than Puppet. However, Puppet seems easier to use. I need to manage several web servers, as well as handheld tablets and machines that will only connect to some central control servers periodically. All are Linux machines. Would I be able to use either Puppet or cfengine for this? And if so, does Puppet still make poor use of resources? I'd like to use Puppet because it seems simpler, but a lot of the articles I've found refer to cfengine 2 - is cfengine 3 easier to configure? Thanks

  • High Load - Low IO - Low CPU usage

    - by devup
    I have a system whose load is rather high. As you can see from the top output below, CPU usage and I/O are negligible:

        top - 17:31:59 up 4 days, 2:34, 2 users, load average: 1.00, 0.99, 1.00
        Tasks: 71 total, 1 running, 70 sleeping, 0 stopped, 0 zombie
        Cpu(s): 2.0%us, 2.0%sy, 0.0%ni, 95.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 960720k total, 707288k used, 253432k free, 67328k buffers
        Swap: 2811896k total, 2644k used, 2809252k free, 528928k cached

          PID USER  PR NI VIRT  RES SHR S %CPU %MEM   TIME+ COMMAND
        15310 root  20  0 2512 1128 888 R  2.1  0.1 0:00.05 top

    I would appreciate any assistance with isolating the cause(s) of the high load when I/O and CPU are not factors.
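
    Worth noting: the Linux load average counts not just runnable processes but also those in uninterruptible sleep (state D), typically stuck on NFS, a disk, or a misbehaving driver. With the CPU 95% idle and 0% iowait, a lone D-state process is the usual suspect, and it is easy to check for:

        # List processes in uninterruptible sleep (state D), with what they wait on.
        ps -eo state,pid,user,wchan:32,cmd | awk '$1 ~ /D/'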

  • Optimizing wireless router speed and minimizing interference

    - by Tchalvak
    I've been experiencing problems with my wireless connectivity lately and want to make sure that it's not related to the abundance of other wireless routers here in my building. So, what I'm looking for is a method (probably via some application or another) to audit the wireless channels (and other factors that might be important that I don't even know of yet) floating through the aether around me. Ubuntu or other Linux apps are preferred, but some kind of Windows/Mac solution is possible, since I do have other OSes around me that I could install and test on. Router: Netgear WGT624 v3. Hearsay tells me that channels 1, 6, and 11 are "non-overlapping" (I expect they aren't used for non-wireless-router purposes or something; I'm not sure how they couldn't overlap with other routers using other channels), so perhaps my best choices of channel are limited. If channels aren't really a big concern, I'd be happy to get links to other optimizations that I should look into.
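
    On Ubuntu, a first pass needs nothing beyond the standard wireless tools: scan and see which channels the neighbouring access points occupy (the interface name is an assumption; newer systems may use iw dev wlan0 scan instead):

        # List nearby access points with their channels and signal levels.
        sudo iwlist wlan0 scan | grep -E 'ESSID|Channel:|Signal level'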

  • Is it safe to delete "Account Unknown" entries from Windows ACLs in a domain environment?

    - by Graeme Donaldson
    It's not uncommon to see entries in Windows ACLs (NTFS files/folders, registry, AD objects, etc.) with the name "Account Unknown (SID)". Obviously these exist because of old AD users or groups which at some point had permissions manually configured on the relevant object and have since been deleted. Does anyone know if it is safe to remove these "Account Unknown" ACEs? My gut feeling is that it should be just fine, but I'm wondering if anyone has had past experiences where doing this caused trouble? Normally I just ignore these, but the company I'm working at now seems to have an abnormal number of them, most likely due to past admins' inexperience with AD/Windows and assigning permissions to user accounts rather than groups in all sorts of weird places. FWIW, our environment is not complex - a single domain forest, 4 DCs in 3 sites, with all network connectivity and replication healthy - so I'm certain that these "Account Unknown" entries are really old accounts, and not just a failure to resolve the SID to a human-readable name.

  • HPC Cluster planning workflow?

    - by Veronica
    After three days of intensive Google searching, I have not found any high-level workflow for building a low-profile, cheap computing cluster (we are not interested in HA yet). This is just a front-end plus one node for now. We want to start small with Rocks Cluster, provide a web-based server for offering services, and then add nodes as our budget increases. We're a small company, so we don't have enough human resources to implement it smoothly. Here are some facts about our environment:
    - Our hardware is not constant (we will add nodes).
    - Our workload will vary (on the order of 200 MB - 1 TB).
    - Our software will change (scientific applications for data mining).
    Do you know of any visual workflow, worksheet, or chart describing the general steps necessary to begin our cluster planning?

  • Why do the times elapsed connecting to a server differ?

    - by user1634619
    I have a small program which connects to a server of my choice and measures the elapsed time to do so. Each time I run it, it returns a different result. My question is: what does this time depend on? Network congestion, for one. If I choose a server that has multiple addresses, e.g. google.com, the length of the physical link may differ from time to time - is it safe to assume that this also affects connection time? Are there any other factors in play?
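
    A quick way to see which components vary between runs: curl can report the name lookup, TCP connect, and total times separately (using google.com per the question's own example):

        # Break the elapsed time into DNS lookup, TCP connect, and total.
        curl -o /dev/null -s \
             -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
             http://google.com/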

  • SQL Server cluster performance baseline

    - by Dwight T
    Currently I'm tasked with establishing a good performance baseline on a SQL 2005 cluster. The main DB on the server is for SharePoint, but I would like to add other DBs to the cluster. I do have access to Quest's Performance Analysis tool to help. What are the key factors to look at to see if the cluster can handle additional DBs? Do you look at different performance indicators for a cluster vs. a standalone SQL Server? The new DBs would be a low-usage transactional DB and a read-only DB used for sales data. Thanks Dwight
