Search Results

Search found 8692 results on 348 pages for 'per magnusson'.


  • What should I be doing while I wait for a progress bar?

    - by Malnizzle
    So I am sitting here waiting for a progress bar to run (20 mins or so), and was wondering how best to use my time as a SysAdmin. I briefly debated not posting this question, as it could get flagged as subjective, but I think it's an important question, and one that can be legitimately answered (per the FAQ). I know this is something a lot of sysadmins deal with, especially if they are client-based, I would venture to guess. There is a lot of material out there about how to multitask, but SysAdmin work is unique in this area. I could switch over to another project, but I could get wrapped up in that and forget about the original project I was working on, which is a problem if you are billing a client for your time, both for tracking that time and for being fair to the client. I could check ServerFault, but that isn't directly work related; I could sort my email, and so on and so forth. What do you do, or what should I do, when I have time waiting for a progress bar? Thanks! (download done, back to work!)


  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on CentOS 5.8 (final). My only concern at the moment is what appears to be intermittent but continuous high disk I/O activity causing a general slowdown, due to the jbd2/sda2-8 process. jbd2/sda2-8 uses /dev/sda2, the second partition of the first hard drive (i.e. the root partition). More info: using iotop, the culprit appears to be jbd2/sda1-8 making writes every second, which, if my googling around is correct, is a kernel process associated with journaling on the ext4 filesystem. I see jbd2/sda2-8 appearing here every now and then, but certainly not every 3 seconds: when the system is idle, it appears about 1 or 2 times per minute; when I'm using the system, it appears more frequently.

    ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png
    IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png
    HTOP results: http://grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png

    jbd2/sda2-8 is the process I see with iotop making writes to disk even though the machine is not in use at all. Does anyone have any idea how I could solve the high disk usage caused by the jbd2/sda2-8 process?
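
    Since the jbd2 thread only commits journal data that some other process dirtied, the useful question is who is doing the dirtying. One hedged way to catch that on a 2.6 kernel is the block_dump switch (stop syslogd first, or its own logging of the dmesg output will feed the loop):

        # log block-level I/O to the kernel ring buffer for a few seconds
        /etc/init.d/syslog stop
        echo 1 > /proc/sys/vm/block_dump
        sleep 10
        dmesg | egrep "WRITE|dirtied" | tail -50   # lines show PID, process name, and block/inode
        echo 0 > /proc/sys/vm/block_dump
        /etc/init.d/syslog start

    If the output points at atime updates, mounting the filesystem with noatime is a common mitigation.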


  • Logstash agent doesn't run as a daemon on Mac OS X 10.9.1

    - by user329324
    I need to run the logstash agent as a daemon on a Mac OS X system, starting whenever the system boots up. From a terminal, /usr/local/logstash/bin/logstash agent -f /usr/local/etc/cvlog.conf works successfully, but as a daemon it doesn't start. My com.bcd.logstash.plist:

        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>com.bcd.logstash</string>
            <key>KeepAlive</key>
            <dict>
                <key>SuccessfulExit</key>
                <false/>
            </dict>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/logstash/bin/logstash</string>
                <string>agent</string>
                <string>-f</string>
                <string>/usr/local/etc/cvlog.conf</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I start it with: launchctl load /Library/LaunchDaemons/com.bcd.logstash.plist

    Syslog error messages:

        com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code:1
        com.apple.launchd[1] (com.bcd.logstash[pid]): Exited with code:143

    What's wrong with my plist?
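
    Two quick checks before digging deeper, assuming the file really is installed at the path shown above:

        plutil -lint /Library/LaunchDaemons/com.bcd.logstash.plist    # catches malformed XML such as </false> vs <false/>
        sudo launchctl load -w /Library/LaunchDaemons/com.bcd.logstash.plist
        sudo launchctl list | grep com.bcd.logstash                   # shows PID (if running) and last exit status

    Exit code 143 is 128+15, i.e. the process died from SIGTERM, which usually means launchd itself killed the job rather than the job crashing on its own.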


  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like DropBox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project — estimates, contracts, project plans, budgets — and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of the file, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare — much less merge! — the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but that also let users lock files — even if it's not a real lock but just a flag or indicator — to prevent remote workers from both editing the same file at once? Having one person wait for another person to finish editing is a very, very small inconvenience compared to the hour or more that it can take to compare two estimates by hand until you find and resolve the rival changes. Given this fact, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution! Does anyone know of a service that does?


  • Measure Total Bandwidth for Billing

    - by TonyZ
    I am setting up a new network on which customers will host their applications. It needs to be able to scale out to a few hundred servers, and each server will have several VMs on it. Right now in my test environment, after the telco router, we are using a Linux router/firewall which is then connected to a layer 2 switch (could be a layer 3 in the future). I need to track total bandwidth per VM for each machine, and I need to do it in a way that is not part of the VM. Each VM will have a private IP address which is NATted by the gateway, or we may eventually run more than one firewall/reverse proxy off a layer 3 switch. So my thinking is that I can do it off of a promiscuous port on the switches, or at the gateway firewall. I would like an out-of-the-box solution, preferably open source. Does anyone have suggestions on the easiest way to set this up, and the easiest tool to use? I have looked at the web sites for Nagios, Zenoss, Zabbix, ntop on the firewall, etc. It is hard to ascertain just from the web sites whether they do exactly this or not. Obviously, performance is also somewhat key here: anything running on the gateway should not drag it down doing traffic accounting. Thanks for any thoughts. Tony Zakula
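
    Since a Linux box already sits in the traffic path, one low-tech baseline is per-IP iptables counters on the gateway; a minimal sketch, with 10.0.0.101 standing in for one VM's private address:

        # rules with no -j target only count traffic; read the totals with:
        #   iptables -L ACCT -v -x -n
        iptables -N ACCT
        iptables -I FORWARD -j ACCT
        iptables -A ACCT -s 10.0.0.101    # VM outbound bytes/packets
        iptables -A ACCT -d 10.0.0.101    # VM inbound bytes/packets

    A cron job can snapshot the counters into a database for billing; tools like ntop or Cacti essentially automate this same bookkeeping with nicer reporting.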


  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return", and it is a common setup for load balancing; it is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load-balancing setup: namely, server B should have server A's IP address on the loopback interface, and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
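
    For reference, the packet mechanics of steps 4 and 5 are exactly what Linux LVS does in direct-routing mode, so it makes a useful sketch even though plain LVS balances blindly rather than by URL (the addresses are made up: 192.0.2.10 is the shared public IP on server A, 192.0.2.20 is server B):

        # on server A: virtual service plus B as a direct-return real server
        ipvsadm -A -t 192.0.2.10:80 -s rr
        ipvsadm -a -t 192.0.2.10:80 -r 192.0.2.20 -g    # -g = gatewaying, i.e. direct server return
        # on server B: hold the shared IP on loopback and never ARP for it
        ip addr add 192.0.2.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

    The content-aware part (choosing B from the URL) would still have to come from your own code or an L7 balancer in front; the sketch only covers the forward-and-return path.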


  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically for BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access; no matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined. We have thus far implemented this in ISC DHCP on Linux: we have multiple VLANs with one or more public-IP subnets and one RFC 1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless your MAC is in a list of known bad MACs; then you are sent to the quarantine subnet so that your traffic is unroutable on the Internet (you are isolated by subnet only, not by VLAN). We would like to move to Windows DHCP in light of the IPAM role, but I cannot figure out how to replicate this in Windows DHCP 2012 ("Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2" suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like: the administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined, and the DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in. I don't think reservations would work: we currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations, nor have to add 12 reservations per new MAC address; not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC, and you do not have to register your MAC address to get network access. We use Cisco routers/switches. Thanks.
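
    For what it's worth, Server 2012 DHCP policies can match on MAC address from PowerShell. A hedged sketch follows (cmdlet names as documented for the 2012 DhcpServer module; the addresses are placeholders), with the caveat that a policy can only hand out a range inside its own scope, so this approximates rather than reproduces a separate quarantine subnet:

        Add-DhcpServerv4Policy -Name "Quarantine" -ScopeId 10.1.0.0 `
            -Condition OR -MacAddress EQ,00-11-22-33-44-55,EQ,66-77-88-99-AA-BB
        Add-DhcpServerv4PolicyIPRange -Name "Quarantine" -ScopeId 10.1.0.0 `
            -StartRange 10.1.0.200 -EndRange 10.1.0.250

    Keeping ~300 MACs current would still mean scripting the list into each of the 12 scopes, but at least it is one policy per scope instead of 300 reservations.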


  • Typical outbound port list for guest access?

    - by Steve
    I manage a weekly rental house that includes wireless Internet access. I've allowed all outbound ports on my router, but my ISP has disabled my Internet access twice now because guests have downloaded (or served up) copyrighted content. So I'd like to institute some port filtering to discourage p2p sharing (see disclaimer below), without inconveniencing the 99.9% of folks who keep things above-board. My question is: what outbound ports are typically open for rental/hotel wireless Internet access, or where can I find such a list? TCP 80, 443, 25, 110 at a minimum. My own email service uses 995 and 465 for SSL, some guests may use IMAP, and I personally use SSH and FTP, so I'll open those. Roughly, I figure I need to open access to privileged ports and close 1024 & above. Is there a whitelist I should institute for commonly used high ports? And does it make sense to block UDP above 1024? Disclaimer: I realize anyone replying to this message could circumvent the port filtering and share content to their heart's content. I do not need comprehensive p2p blocking, which requires more than a port whitelist; anyone staying at the house shoulders the responsibility for their Internet use, per the rental contract. Also, anyone savvy enough to circumvent the port filters would hopefully be savvy enough to use some sort of peer blocking, thereby preventing the ISP from taking down the service.
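
    As a concrete starting point, here is what such a whitelist might look like as iptables rules on a Linux-based router (a sketch only; eth0 is assumed to face the guest LAN, and the port list mirrors the services named above plus DNS and mail submission):

        iptables -A FORWARD -i eth0 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -i eth0 -p tcp -m multiport \
            --dports 21,22,25,80,110,143,443,465,587,993,995 -j ACCEPT
        iptables -A FORWARD -i eth0 -j DROP

    Off-the-shelf routers expose the same idea through an "outbound services" or access-restrictions page rather than raw rules.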


  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server; the copy was done in Windows Explorer. In this situation I would expect the server to be bored by the dataflow, but as usual with this server, it really slowed down. At least I could work with the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague, and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the control panel, a minute to open the performance monitor... icons were loading maybe one per second. We saw the following (from memory):

        CPU: 2%
        Avg. Queue Length: 50
        Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

        Windows 2003
        Seagate ST3500631NS (7200 rpm, 500 GB)
        LSI MegaRAID based RAID 5: 4 disks, 1 hot spare
        Write Through, no read-ahead, Direct Cache Mode, harddisk cache off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!


  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server, file sizes are between ~15 MB and 1 GB, and a typical transfer batch represents between 1 and 2 GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1 GB file. When SSL is disabled, performance is good, with a transfer rate of ~120 MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112 MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically:

        6.7 MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf)
        16 MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA)

    I didn't test other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd uses only a single CPU/core per client. While that may fit large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other OpenSSL ciphers...
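
    Those two ciphers are among the slowest OpenSSL offers in software, so one cheap experiment is to benchmark candidates on the server itself and point ssl_ciphers at a faster one. A sketch (AES128-SHA is a standard OpenSSL cipher name, but check it against the output of openssl ciphers -v on your build):

        # raw single-core throughput of candidate algorithms
        openssl speed des-ede3 des aes-128-cbc rc4

    and then in vsftpd.conf:

        ssl_ciphers=AES128-SHA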


  • Why are my SOCKS proxies slow?

    - by vps_newcomer
    I have a Linux VPS, and I have tried a few SOCKS proxy setups to test their performance (all tests used speedtest.net):

        Standard SSH tunnel proxy: 0.8 mbit/s download and 0.1-0.2 mbit/s upload
        dante-server proxy: 1.3 mbit/s download and 0.4-0.5 mbit/s upload

    I am wondering why these speeds are so slow. Is anything shaping them? Is it just the nature of SOCKS proxies? I know that the SSH tunnel has to do encryption and whatnot, so that is why it's slow, but I was surprised to see that the second setup was also quite slow. On the VPS itself I have seen download speeds of 25 MB/s (that's about 200 mbit/s) and upload speeds of at least 5 MB/s (haven't got a good enough pipe to test anything faster). The other option I was going to try is to set up OpenVPN and see how that goes; however, I need to find a good tutorial, as it's fairly complicated to set up. So why is it so slow? How can I test to see where the bottleneck is? How can I make it faster? :D
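
    For the SSH side specifically, a cheap A/B test is to swap in a lighter cipher and toggle compression, since the default AES modes can be CPU-bound on a small VPS. A sketch (arcfour was still available by default on OpenSSH builds of that era):

        # lighter cipher plus compression
        ssh -D 1080 -C -c arcfour user@vps
        # defaults, for comparison
        ssh -D 1080 user@vps

    If dante and SSH bottom out at similar speeds, though, the bottleneck is more likely per-connection TCP throughput (latency and window size) between you and the VPS than the proxy software itself.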


  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of folders to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has its own Celerra replication option. Thank you very much in advance; please ask if I missed any details!


  • How do I calculate clock speed in multi-core processors?

    - by NReilingh
    Is it correct to say, for example, that a processor with four cores each running at 3 GHz is in fact a processor running at 12 GHz? I once got into a "Mac vs. PC" argument (which, by the way, is NOT the focus of this topic... that was back in middle school) with an acquaintance who insisted that Macs were only being advertised as 1 GHz machines because they were dual-processor G4s, each running at 500 MHz. At the time I knew this to be hogwash for reasons I think are apparent to most people, but I just saw a comment on this website to the effect of "6 cores x 0.2 GHz = 1.2 GHz" and that got me thinking again about whether there's a real answer to this. So, this is a more-or-less philosophical/deep technical question about the semantics of clock speed calculation. I see two possibilities:

    1. Each core is in fact doing x calculations per second, thus the total number of calculations is x(cores).
    2. Clock speed is rather a count of the number of cycles the processor goes through in the space of a second, so as long as all cores are running at the same speed, the speed of each clock cycle stays the same no matter how many cores exist. In other words, Hz = (core1Hz + core2Hz + ...) / cores.


  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

        #!/bin/sh
        # cherrypy_server.sh

        PROCESSES=10
        THREADS=1        # threads per process
        BASE_PORT=3035   # the first port used

        # you need to make the PIDFILE dir and insure it has the right permissions
        PIDFILE="/var/run/cherrypy/myproject.pid"

        WORKDIR=`dirname "$0"`
        cd "$WORKDIR"

        cp_start_proc() {
            N=$1
            P=$(( $BASE_PORT + $N - 1 ))
            ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" \
                threads=$THREADS request_queue_size=0 verbose=0
        }

        cp_start() {
            for N in `seq 1 $PROCESSES`; do
                cp_start_proc $N
            done
        }

        cp_stop_proc() {
            N=$1
            #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
            [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
            rm -f "$PIDFILE-$N"
        }

        cp_stop() {
            for N in `seq 1 $PROCESSES`; do
                cp_stop_proc $N
            done
        }

        cp_restart_proc() {
            N=$1
            cp_stop_proc $N
            #sleep 1
            cp_start_proc $N
        }

        cp_restart() {
            for N in `seq 1 $PROCESSES`; do
                cp_restart_proc $N
            done
        }

        case "$1" in
            "start")   cp_start ;;
            "stop")    cp_stop ;;
            "restart") cp_restart ;;
            *)         "$@" ;;
        esac

    From the bash script, we can essentially do 3 things: start the cherrypy server by calling ./cherrypy_server.sh start, stop it by calling ./cherrypy_server.sh stop, and restart it by calling ./cherrypy_server.sh restart. How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when the machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
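
    A minimal unit for this script might look like the sketch below; the install path /opt/myproject is an assumption, and Type=oneshot with RemainAfterExit suits a script that spawns daemonized children and then exits:

        # /etc/systemd/system/cherrypy.service
        [Unit]
        Description=CherryPy server pool
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        WorkingDirectory=/opt/myproject
        ExecStart=/opt/myproject/cherrypy_server.sh start
        ExecStop=/opt/myproject/cherrypy_server.sh stop
        ExecReload=/opt/myproject/cherrypy_server.sh restart

        [Install]
        WantedBy=multi-user.target

    After a systemctl daemon-reload, systemctl enable cherrypy.service takes care of the start-on-boot requirement; the longer-term cleanup would be one templated unit per CherryPy process instead of the shell loop.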


  • Simulated NAT Traversal on VirtualBox

    - by Sumit Arora
    I have installed VirtualBox (with two NAT-type virtual adapters); the host is Ubuntu 10.10 and the guest is OpenSUSE 11.4.

    Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing. Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network. (The IP arguments to --to-source and --to-destination were lost from the original post; they are shown as placeholders below.)

    Full-cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination <private-ip>

    Restricted cone:

        iptables -t nat -A POSTROUTING -o eth1 -p tcp -j SNAT --to-source <public-ip>
        iptables -t nat -A POSTROUTING -o eth1 -p udp -j SNAT --to-source <public-ip>
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j DNAT --to-destination <private-ip>
        iptables -t nat -A PREROUTING -i eth1 -p udp -j DNAT --to-destination <private-ip>
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP

    Port-restricted cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source <public-ip>

    Symmetric:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    What I did: the OpenSUSE guest has two virtual adapters, eth0 and eth1: eth1 with address 10.0.3.15 (and eth1:1 as 10.0.3.16), eth0 with address 10.0.2.15. I then ran stund (http://sourceforge.net/projects/stun/) as client and server:

    Server:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./server -v -h 10.0.3.15 -a 10.0.3.16

    Client:

        eKimchi@linux-6j9k:~/sw/stun/stund> ./client -v 10.0.3.15 -i 10.0.2.15

    In all four cases it gives the same results:

        test I = 1
        test II = 1
        test III = 1
        test I(2) = 1
        is nat = 0
        mapped IP same = 1
        hairpin = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001

    Q-1: Please let me know if anyone has ever done this. It should behave like a NAT as per the description, but nowhere is it working as a NAT.
    Q-2: How is NAT implemented in home routers (usually port-restricted)? Are those also just pre-configured iptables rules on a tuned Linux?


  • Best way to attach 96 TB to a workstation

    - by user994179
    I'm running a workstation with dual Xeon 5690s (12 physical / 24 logical cores), 192 GB of RAM (i.e., maxed out), Windows 7 64-bit, 5 slots for adapter cards, and 1 TB of internal storage, with 5 more internal bays available. I have an app that creates data files totaling about 88 TB. These are written once every 14 months, and the rest of the time the app only needs to read them; 95% of the reads are sequential reads of huge chunks of data. I have some control over how big the individual files are, but ideally they would be between 5 and 8 TB. The app will be reading from only one drive at a time, and the nature of the data is such that if (when) a drive dies I can restore the data to a new disk from tape. While it would be nice to use the fastest drives/controllers available, at this point size matters more than speed. After doing lots of reading, I am leaning toward buying a bunch of cheap 2 TB drives and putting them into a bunch of cheap enclosures. All this stuff is going into my home office, so I need to avoid the raised-floor/refrigerated approach. My questions:

    1. Is the cheap drive/enclosure solution the best one for this situation?
    2. Given the nature of the app and the way the data is used, does RAID make sense? If so, which one?
    3. For huge sequential reads, would USB 3.0 and eSATA be a wash performance-wise?
    4. For each slot available on the workstation, can I hook up an enclosure that can hold multiple drives? Or is it one controller per drive?
    5. If I can have multiple drives on one controller, am I essentially splitting the bandwidth (throughput)? For example, if I have a 12-bay enclosure, is the throughput of the controller reduced by a factor of 12?
    6. Are there any Windows 7 volume/drive/capacity limits I should be aware of?

    Thanks


  • Vanishing Windows Desktop Shortcut Keys

    - by Henry Keiter
    The Situation

    Like you, I have many applications that I like to open. I've set up keyboard shortcuts for the most common, by placing a link on the desktop and setting its Shortcut Key property. This is all fine and dandy, most of the time. When I want to bring up the GIMP, I press Ctrl+Alt+G and the GIMP launches. Lovely.

    The Problem

    Sometimes--perhaps once a month per desktop shortcut--the shortcut key assignment simply vanishes. I press Ctrl+Alt+G and nothing happens, so I go check the shortcut and see that, lo and behold, nothing is there. This happens regularly to all my shortcuts (not all at once). It doesn't matter what keys I assign, and there doesn't seem to be any correlation with particular applications being open or anything of that sort. This has happened on every Windows XP machine I've ever used with any regularity. Obviously what makes this issue particularly obnoxious is that it's not easily reproducible. I have searched long and hard for a solution for (or at least acknowledgement of) this problem, to no avail, so hopefully you guys know something that I don't. I did find this question, where the answers are all basically "use a third-party app", but as far as I could tell that was a slightly different issue, related to Explorer being busy. If the solution for this turns out to be the same, fine, but I'd prefer a native fix if at all possible.

    Note: I've tagged this with Windows in general because I seem to remember it happening on Windows 7 as well as XP, but I rarely notice it there because I use the start-menu search in preference to desktop shortcuts.


  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds, and I'm having trouble figuring out why. It's not too big a site: about 1-2 requests per second hit Django (static files are just served from nginx). The thing that confuses me is that I can replicate the slowness in the Django shell using debug mode, but when I issue the exact same queries at an SQL prompt they are fast. It takes about a second for a query to return, yet connection.queries reports the time as under 10 ms. Here's an example (from the Django shell):

        >>> p = PlayerData.objects.get(uid="100000521952372")
        >>> a = time.time(); p.save(); print time.time() - a
        1.96812295914
        >>> for d in connection.queries: print d["time"]
        ...
        0.002
        0.000
        0.000

    How can I figure out where this extra time is being spent? I'm using Apache + mod_wsgi in daemon mode, but this happens with just the Django shell as well, so I figure it is not Apache-related.
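
    Since connection.queries only times the SQL itself, the missing time has to be spent in Python or in connection handling, and profiling the call should show which. A minimal sketch for the same shell session (cProfile is in the standard library):

        >>> import cProfile
        >>> p = PlayerData.objects.get(uid="100000521952372")
        >>> cProfile.run("p.save()", sort="cumulative")

    Typical culprits that surface at the top of such a profile are signal receivers attached to post_save, per-query connection setup, or slow DNS when the database host is a name rather than an IP; these are assumptions worth ruling out, not established causes here.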


  • ffmpeg: converting an avi to a reduced, shareable flv/mp4...

    - by meder
    I recently followed a guide and recompiled my ffmpeg so x264 is enabled. I used some generic settings to convert my 700 MB AVI file to an MP4 file; the result was a 407 MB MP4 file. The original AVI file's settings:

        Codec: DX50
        Resolution: 704x304
        Frame rate: 23.976023
        Stream 1: Codec mpga, Type Audio, Channels 2, Sample rate 48000 Hz, Bitrate 179 kb/s

    Command I used:

        ffmpeg -i input.avi -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -vpre hq -crf 22 -threads 0 output.mp4

    The settings of the output file (output.mp4):

        Codec: avc1
        Resolution: 704x304
        Display resolution: 704x304
        Frame rate: 11.988011
        Stream 1: Codec mp4a, Type Audio, Channels 2, Sample rate 48000 Hz, Bits per sample 16, Bitrate 1536 kb/s

    The quality of the output MP4 is pretty nice; it seems pretty much the same as the original source. However, I'm trying to reduce the file size, and I'm not really sure whether I should go with FLV or keep MP4. The advantage FLV would have, obviously, is that it would be playable with a Flash player (I have come across some SWF players which take a flash parameter to play an FLV file), but maybe I could use the video element instead, as I'm only going to be displaying this video privately, so I don't have to worry about supporting legacy browsers such as IE. Can someone recommend some settings so that the file size ends up around ~100-150 MB? I don't mind a reduction in quality, nor do I mind resizing it. I was going to resize initially, but I wasn't sure what the guidelines were (if any) for dealing with resolution... since this video is 704x304, would it still be OK if I forced it into a resolution that isn't a perfect fit for the aspect ratio? I have no clue about that part. I realize that I could have probably specified 28 instead of 22 for the CRF; I'm not sure if I should do that as opposed to specifying a smaller resolution, which might also make it smaller.
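
    For a hard size target, the governing arithmetic is simply bitrate = size / duration. A sketch assuming a roughly 100-minute runtime, where ~150 MB works out to about 200 kbit/s total, split here as 150k video + 48k audio (flag spellings match the ffmpeg vintage used above):

        # halve the resolution (704x304 -> 352x152 preserves the aspect ratio) and target a bitrate
        ffmpeg -i input.avi -s 352x152 -acodec libfaac -ab 48k -ac 2 \
               -vcodec libx264 -vpre hq -b 150k -threads 0 output.mp4

    Two-pass encoding (-pass 1 writing to /dev/null, then -pass 2) hits a size target more accurately than a single pass, at the cost of doubling the encode time; scaling to a non-native resolution is fine as long as the aspect ratio (and even dimensions, for x264) is kept.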


  • ESXi Guests will not boot on IBM x3550 M3

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    VM host server:

        IBM x3550 M3 7944AC1 server w/ 2x Intel Xeon E5607 2.27 GHz CPUs
        ESXi 5.0.0 build 623860, built for IBM hardware, downloaded from IBM
        Storage: 2x 500 GB SAS local storage
        8 GB RAM
        VT is verified to be ENABLED
        Server health status: Normal

    The ESXi host boots just fine, and the client connects just fine. Guests can be configured but do not successfully boot: the initial guest memory consumption jumps up to 560 MB and drops down to 40 MB after a few seconds; initial CPU usage is one full CPU (3000 MHz per the chart) and immediately drops down to 29 MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. VMs are listed as version 7, and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted in Legacy Only mode. Logs do not contain anything particularly suspicious.

    Edit: No firewalls, routers, or VLANs in between the client and server.
    Edit 2: We have tried the "Boot Guest into BIOS screen at Next Boot" checkbox in the guest settings; it was not successful.
    Edit 3: 500 GB datastore with one 40 GB VM on it; plenty of space.


  • How do I migrate Exchange 2007 to new hardware?

    - by Graeme Donaldson
    As per my previous question, I have an Exchange 2007 box which is also a DC. Since I can't demote it while Exchange is installed, I want to move Exchange to a different server. Does anyone have any articles, tips or experiences to share on this? The last time I did this it was with Exchange 2003, and even that is a little rusty in my head. The setup is a single Exchange 2007 Hub/Edge/Mailbox/CAS server. It's currently on Windows Server 2008; I can migrate it to the same OS, or I can go to 2008 R2, I'm not really picky on that. We're running OWA/ActiveSync/POP3(S)/IMAP(S) for client access. I already have another fully functional DC/GC/DNS box in the same site, and clients in the site are already using that for DNS. It's also the preferred site bridgehead for AD replication.

    Update: After reading Evan's answer I realised that my original question wasn't worded correctly. I'm not looking to do a swing migration; I actually need to move Exchange completely over to a new box. I have done swing migrations in the past, i.e. moving over to a temporary box and back to the original hardware afterwards, and I'm not really sure why I used that term in the original question, since it's not what I intended. Any tips?


  • Cheap desktop computer in 19" rack-mountable form-factor?

    - by Alex Basson
    I'm a high school teacher at a small private school. As of this year, we have SMART Boards in every classroom (though I've had one in the class I share for two years now). The classrooms themselves don't have computers in them, so we teachers bring our laptops to class and connect them to the boards. This has several disadvantages:

    1. It takes a few minutes while we wait for the board to boot up and then orient the board to our individual laptop; we have to do this every time because different teachers have different laptops requiring different orientations. This isn't ideal because when you only have 43 minutes per class period, waiting five minutes just to get started is a real waste.
    2. Carrying your laptop to class doesn't sound so bad until you consider that we're also carrying textbooks and piles of student papers, and we're carrying it all through crowded high school hallways. More than one laptop has fallen THUNK to the floor, with dire consequences.

    We feel we could eliminate the need to use our laptops with the boards if we had a dedicated computer in each classroom hooked up to the board at all times. Each board setup is connected to a podium with a standard 19" rack in it, currently housing a power supply and DVD player. There are plenty of rack spaces available. So I'm thinking: maybe we could get some inexpensive computers in a 19" rack-mountable form factor, install them in the podiums, and connect them to the boards on a permanent basis. Any suggestions?


  • Mac SMB connections to Windows 2003 server, leaving Open Files

    - by Bruce Garlock
    We have several Mac clients (both 10.5 and 10.6) mounting a share from a Windows 2003 server. At least once a day, our archivist goes into this share to archive items from it to the backup server. Most of the time she has no issues: she copies a folder to the archive server and, when it's done, deletes it from the share. Then she will come upon one where it says she doesn't have permission. When I go into the open sessions, it says a particular user has a READ lock on the file in Windows 2003. Of course, this person does not have the file open, and the only way we can delete it is to close the open session on the file. My thoughts:

    1. The Mac likes to "sprinkle" hidden resource-fork files on SMB servers, and possibly these files still linger after the Mac that last wrote to the share closes the file.
    2. Windows 2003 has a bug that doesn't properly release the oplock on the file?
    3. Steve Ballmer just doesn't like Macs, so he wants to annoy everyone by not releasing file locks :-)

    What can be done about this? It happens every day, and sometimes several times per day! Many thanks, Bruce
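
    As a stopgap while the root cause is unknown, the stale lock can at least be cleared from the server's command line instead of the MMC snap-in; a sketch (the ID in the second command is hypothetical, use whatever the listing reports):

        REM list open files along with their IDs and lock counts
        net file
        REM force-close the stale session for file ID 42
        net file 42 /close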


  • How to diagnose website performance/app pool recycling with Windows 2008/IIS7

    - by ilasno
    Ok, so there are various symptoms here (clients and our own employees complaining of intermittent slowdowns, getting 'kicked out' to the login page, or just having a save request not properly save the submitted data). The environment:

        Windows Server 2008 (Datacenter), Service Pack 2, 64-bit, 2x 2.8 GHz processors, 7.5 GB RAM
        MS SQL Server 2008 (running on the same machine)
        IIS 7

    There are ~10 websites running on the server, each in its own application pool; most of these pools are running in Integrated mode, 2 are in Classic, all are on .NET 2.0, and all run as ApplicationPoolIdentity. I'm trying to analyze, diagnose, and troubleshoot, and am struggling with where to get more info about what could be happening. Here are some steps I have already taken:

        Set each application pool to recycle once per day, and removed any other automatic recycling
        Set a Virtual Memory Limit for each to 1024000 KB (1 GB)
        Enabled ALL 'Generate Recycle Event Log Entry' entries (Config Changes, Isapi Reported Unhealthy, Manual Recycle, Private Memory Limit Exceeded, Regular Time Interval, Request Limit Exceeded, Specific Time, Virtual Memory Limit Exceeded)

    I have seen the app pool processes recycle (in Task Manager): a new one will start up, and then the first one dies off, and this has happened without the memory or time going over the settings. This is a fairly new server, and most of these sites came from Windows Server 2003/IIS 6. Any 'next steps' for setting up information gathering, logging, diagnosing, etc. would be much appreciated! j
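
    With the recycle-event logging enabled, the recycle reasons land in the System event log under the WAS provider, so one way to review them in bulk is the sketch below (the provider name is from memory and worth verifying in Event Viewer first):

        wevtutil qe System /c:20 /f:text /rd:true ^
            /q:"*[System[Provider[@Name='Microsoft-Windows-WAS']]]"

    If the pools are recycling for reasons the events don't explain, the usual next step is IIS7's built-in Failed Request Tracing to catch the slow or failing requests themselves.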


  • When should we choose simple PHP mail() and when SMTP with login + password?

    - by user43353
    Hi, my case: a web application that needs to send 1,000 messages per day to a main Gmail account. (It only needs to send email; it does not need to receive emails like an email client.)

    Option 1 - use the PHP mail function + sendmail + php.ini config.

    PHP example:

        <?php
        $to      = '[email protected]';
        $subject = 'the subject';
        $message = 'hello';
        $headers = 'From: [email protected]' . "\r\n" .
                   'Reply-To: [email protected]' . "\r\n" .
                   'X-Mailer: PHP/' . phpversion();
        mail($to, $subject, $message, $headers);
        ?>

    php.ini config (Ubuntu):

        sendmail_path = /usr/sbin/sendmail -t -i

    Pros: doesn't need an email account, easy to set up. Cons: ?

    Option 2 - use Zend_Mail + an SMTP transport with login + password.

    PHP example (the Zend_Mail classes need to be included):

        $config = array('auth' => 'login', 'username' => 'myusername', 'password' => 'password');
        $transport = new Zend_Mail_Transport_Smtp('mail.server.com', $config);
        $mail = new Zend_Mail();
        $mail->setBodyText('This is the text of the mail.');
        $mail->setFrom('[email protected]', 'Some Sender');
        $mail->addTo('[email protected]', 'Some Recipient');
        $mail->setSubject('TestSubject');
        $mail->send($transport);

    Pros: ? Cons: ?

    Questions: Can option 1 be filtered by the Gmail server as spam? Please add pros + cons to the options above. Thanks
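
    Since the destination is Gmail anyway, one relevant variant of option 2 is authenticating against Gmail's own SMTP relay, which largely answers the spam concern because the mail arrives over an authenticated session. A sketch of the transport config (host, port, and TLS values are Gmail's published SMTP settings; the credentials are placeholders):

        $config = array('auth'     => 'login',
                        'ssl'      => 'tls',
                        'port'     => 587,
                        'username' => 'myaccount@gmail.com',
                        'password' => 'secret');
        $transport = new Zend_Mail_Transport_Smtp('smtp.gmail.com', $config);

    Unauthenticated mail() + sendmail from a host without proper SPF/reverse DNS, by contrast, is exactly the profile Gmail's filters are most suspicious of.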

