Search Results

Search found 22803 results on 913 pages for 'customer support sr'.


  • Dumping Active Directory

    - by Nop at NaDa
    I work in the IT support department of a branch of a huge company. I have to take care of a database with all the users, computers, etc. I'm trying to find a way to update the database as automatically as possible, but the IT infrastructure guys don't give me enough privileges to query Active Directory directly in order to dump the users, nor do they have the time to give me the information I need. Some days ago I found Active Directory Explorer from Sysinternals, which lets me browse through Active Directory, and I found all the information I need there (username, real name, creation date, privileges, company, etc.). Unfortunately, I'm unable to export the data to a human-readable format; I can only take a snapshot of the whole database in a machine-readable format. Taking the snapshot takes hours, and I'm afraid the infrastructure guys won't like me doing entire snapshots on a regular basis. Do you know of any tool (command-line preferred) that would let me retrieve the values of the keys, or export them to XML, CSV, etc.?
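
    A minimal sketch of one route worth trying: if AD Explorer can read the directory with your normal account, the built-in csvde tool (present on domain controllers and on machines with the admin tools installed) can usually read the same attributes over plain LDAP and write them straight to CSV. The base DN and attribute list here are illustrative:

        :: export basic user attributes to CSV using read-only LDAP queries
        csvde -f users.csv -d "DC=example,DC=com" -r "(objectClass=user)" ^
              -l "sAMAccountName,displayName,whenCreated,company"

    dsquery user -limit 0 piped into dsget is a similar read-only alternative if csvde isn't available.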


  • How do I back up Hyper-V VMs with Windows Server backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and Google, and I CAN find information about how to back up Hyper-V virtual machines using Windows Server Backup from the Hyper-V host on Windows Server 2008: you have to set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found here: http://support.microsoft.com/kb/958662 ... but it said it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS support for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a Windows Server Backup job for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes on R2?
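
    For reference, this is the registry change the 2008-era write-ups (and the KB 958662 FixIt) apply to register the Hyper-V VSS writer with Windows Server Backup. Whether R2 still needs it is exactly the open question, but querying whether the key already exists on R2 is harmless and would answer it:

        :: check whether the writer is already registered
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support" /s
        :: the change from KB 958662 (Windows Server 2008); add only if missing
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" ^
            /v "Application Identifier" /t REG_SZ /d "Hyper-V"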


  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would consider obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each choice. These are some of the questions that come up:
    - where to put sources
    - where to put built object files and libraries
    - where to install
    - what to set in environment variables
    - which compiler flags matter, and how to manage them across several types of builds
    - what configuration entries to make in an IDE
    - how to manage libraries to support multiple environments
    - how to handle different build variants, such as debug vs. release, or cross-platform builds
    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about them that result in as little manual configuration as possible, given some existing sources, a particular IDE (or no IDE at all), a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more, covering:
    - debug builds without an IDE
    - debug builds in NetBeans
    - debug builds in Eclipse
    - debug builds in Visual Studio
    - all of the above, plus release builds, for Linux, Mac and Windows
    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?
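
    A minimal sketch of one common convention, in case it helps frame the decisions (all paths illustrative): keep sources pristine under ~/src, build out of tree so debug and release objects never mix, and install into a per-user prefix instead of /usr/local. CMake supports this directly, and autotools projects do the same thing via VPATH builds:

        # out-of-tree builds: one build directory per variant
        mkdir -p ~/src ~/build/myproj/{debug,release} ~/local
        cd ~/build/myproj/debug
        cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX="$HOME/local" ~/src/myproj
        make -j"$(nproc)" && make install
        # the release variant differs only in build directory and build type
        cd ~/build/myproj/release
        cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$HOME/local" ~/src/myproj

    IDEs then point at the same source tree with their own build directories, and PATH/LD_LIBRARY_PATH pick up ~/local for anything you've installed yourself.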


  • cheap gigabit switch for small business

    - by neoice
    My friend's business is currently borrowing my Adtran 1224R and is very happy with it. It's configured with a few VLANs to segment customers, internal traffic and public wifi; port 1 is a "trunk" port to the router, a chunky Linux box with iptables+NAT. They push a lot of traffic over the LAN (data backups) and really need gigabit. Besides, I'd like my Adtran back :P My goal is to find a cheap(ish) switch that can function as a drop-in replacement. It looks like VLAN trunking is actually part of the 802.1Q spec, so anything with VLAN support should cover the current trunk-to-router setup. It's nice to have both a web interface and SSH, but I can configure it either way if needed. Things like the Netgear GS724T have caught my eye, but it seems like none of the hardware in the $300-500 range has really solid reviews, and I'm concerned that "cheaper" hardware might not hold up in a network full of power users. Does anyone have a recommendation for the Netgear GS724T, or for another switch that will meet my needs?


  • Is this DVD drive broken? Brand new, I need help convincing Dell

    - by acidzombie24
    I'm asking because I know Dell is going to give me a problem. How do I know if the DVD burner in my laptop is broken? I burnt 4 DL discs and they ALL failed, so I called Dell and they suggested Roxio. I used it and burnt one disc without error and a second disc with an error. With both apps there were no 'problems' during the burning process; they only failed in the verification step. Some of these bad discs don't work in other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you? When I called Dell, they told me that since the drive reads discs properly 100% of the time and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my "software". I'm annoyed because I don't want to sit on the phone while they watch me burn a DVD, and since I burned one disc correctly I don't want to happen to burn correctly again and have them declare the problem solved (having done nothing) and charge me while refusing a refund. Edit: the errors I got were:
    1. "The request could not be performed because of an I/O device error"
    2. Windows locking up when opening one specific file
    3. "Cannot copy : Data error (cyclic redundancy check)"
    NOTE: the file that causes the problems is different on every disc.


  • Single application through OpenVPN tunnel (Debian Lenny)

    - by user14124
    I'm using Debian Lenny and I want to tunnel rtorrent, and only rtorrent, through an OpenVPN tunnel. I have a tunnel running, and the config file looks like this:

        client
        dev tun
        proto udp
        remote openvpn.xxx.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca /etc/openvpn/xxx/keys/ca.crt
        cert /etc/openvpn/xxx/keys/client.crt
        key /etc/openvpn/xxx/keys/client.key
        tls-auth /etc/openvpn/xxx/keys/tls.key 1
        ns-cert-type server
        comp-lzo
        verb 3
        auth-user-pass
        script-security 3
        reneg-sec 0

    My idea is to run a sockd proxy internally that redirects traffic to the OpenVPN tunnel, and to use the *nix "proxifier" application tsocks to let rtorrent connect through that proxy (rtorrent doesn't support proxies itself). My trouble is configuring sockd, because my IP inside the VPN changes every time I connect. This is a config file someone said would help: http://ircpimps.org/sockd.conf As my IP changes at each connect, I don't know what to put in that config file. I have no control over the host-side config. Any help wanted; any other method is very welcome.
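
    A sketch of one way around the changing address, assuming the Dante sockd packaged in Lenny: its "external" directive accepts an interface name instead of an address, so the proxy follows whatever IP tun0 receives on each connect and nothing in the file needs to change:

        # /etc/danted.conf (sketch): listen locally, exit via the VPN interface
        logoutput: syslog
        internal: 127.0.0.1 port = 1080
        external: tun0
        method: none
        client pass {
            from: 127.0.0.1/32 to: 0.0.0.0/0
        }
        pass {
            from: 127.0.0.1/32 to: 0.0.0.0/0
            protocol: tcp udp
        }

    /etc/tsocks.conf would then point at the proxy with server = 127.0.0.1 and server_port = 1080.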


  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but clicking on it produces an error dialog stating I don't have the necessary permissions, and entering \\webserver manually does the same; \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that webserver is a VMware virtual machine running on the PDC server itself. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes          ; commenting out these
        wins server = 192.168.101.3 ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
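
    Since IP access works while name access fails, the problem is almost certainly name resolution rather than share permissions. One detail worth knowing: Samba refuses to act as a WINS server and a WINS client at the same time, so the "wins support = yes" and "wins server = ..." lines together are suspect. A quick diagnostic sketch from the Samba host (addresses taken from the question):

        testparm -s | head                       # complains if the two wins lines conflict
        nmblookup -U 192.168.101.3 -R webserver  # is the name registered with the WINS server?
        nmblookup -B 192.168.101.2 __SAMBA__     # does the box answer NetBIOS queries at all?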


  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small-to-medium website on and more or less learn the ins and outs of running my own web server, before I bite the bullet and spend a couple grand on a server. The specs are:
    - 2x Intel Xeon 2.4GHz, 400MHz FSB, 512KB cache
    - 4GB PC2100 ECC registered memory
    - 6x 72GB 10K U320 SCSI hard drives
    - Smart Array 5i RAID controller
    - redundant power supplies
    - DVD/floppy, dual Intel GB NICs, USB
    Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major difference is SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro, so would the hardware be likely to support Linux on a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on and probably use for a year or so while I put together a small-to-medium website.


  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month, a (Java-based) process that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, top shows it using 470MB of RAM while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server setup. Would you also suspect a leaking Java process in this case? Output of free before starting the process:

                         total      used      free   shared  buffers   cached
        Mem:           2097152    149264   1947888        0        0        0
        -/+ buffers/cache:        149264   1947888
        Swap:                0         0         0

    free after:

                         total      used      free   shared  buffers   cached
        Mem:           2097152   1094116   1003036        0        0        0
        -/+ buffers/cache:       1094116   1003036
        Swap:                0         0         0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452MB, does that mean the kernel is suddenly using an additional 500MB?
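
    A sketch for testing the "leak" theory yourself: add up every process's resident set and compare it with the kernel's own accounting. On OpenVZ/Virtuozzo-style containers, which are common in virtual hosting (and hinted at by the permanently zero buffers/cached columns above), 'used' reflects the container's bean-counter accounting, so a gap between summed RSS and 'used' points at the container setup rather than at one leaking process:

        # sum of all processes' resident sets, in kB
        ps -eo rss= | awk '{ sum += $1 } END { print sum " kB total RSS" }'
        # the kernel's own view
        grep -E '^(MemTotal|MemFree|Buffers|Cached|Committed_AS)' /proc/meminfo
        # present only on OpenVZ guests: the real limits and failure counters
        cat /proc/user_beancounters 2>/dev/null | head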


  • Is it logical that file system ACLs would be corrupted in a way that adds permissions for another user?

    - by wilbbe01
    I was having issues on a shared hosting provider, with the host's web server instance not serving some files. I asked the company's support about the issue; they responded with the results of getfacl on my home directory, and added the line necessary to give their web server the permissions it needed. All is working happily now, but I noticed a line in the getfacl output for what appeared to be another username, to which I have no relation. I asked them about this, and their response was that it was likely some minor corruption and that I could remove the unwanted line with the setfacl -x option. I know I never added that user to my home directory, and I find it hard to believe that could truly happen through corruption. So now that it is fixed, I'm a little wary: were they trying to cover up accidentally giving someone else permissions to my account, or can this kind of thing really get corrupted in that way? Especially when that user is a real user on the same server. Any thoughts? Thanks.
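
    For reference, a sketch of the inspect-and-remove cycle they suggested (the username is illustrative); it's worth saving the getfacl output from before and after. Genuine disk corruption that happens to form a valid ACL entry naming an existing local user would be remarkable, so an admin's stray setfacl is the likelier explanation:

        getfacl ~                  # list every ACL entry on the home directory
        setfacl -x u:otheruser ~   # remove the stray entry for that user
        getfacl ~                  # confirm it is gone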


  • Silent install of Japanese Language Pack in Win7

    - by Doltknuckle
    Every year, due to re-imaging, I am forced to install the Japanese language pack on a collection of 30 computers. Each year I look for a way to automate this process, and each year I am forced to do it manually. Maybe this year will be different. Has anyone had any luck installing and configuring Far East language support for Windows 7 without user interaction? I have already downloaded KB972813 and have a way to get it out to the computers. What I normally do is this:
    1. Run the EXE, using the default settings.
    2. Open up language settings and create the JP keyboard.
    3. Configure the language bar settings.
    4. Copy the settings to the default user.
    5. Delete the local user cache.
    6. Sign the different user accounts in to make sure the default settings are correct.
    This whole process takes about 10 minutes; multiply that by 30 machines and you are looking at a 5-hour job. If I can log into all of the computers at once, I can normally cut that down to about an hour. Any ideas would be appreciated. Thanks in advance.
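
    A sketch of the unattended route, assuming you can extract the language pack (lp.cab) from the KB972813 package: lpksetup does silent language-pack installs, and "control intl.cpl" can apply regional and keyboard settings from an answer file, including the copy-to-default-user step. The paths and the XML answer file here are illustrative; the XML has to be written once by hand:

        :: install the Japanese language pack silently, suppressing the reboot
        lpksetup.exe /i ja-JP /p C:\deploy\langpack /r /s
        :: apply keyboard/locale settings from a prepared answer file,
        :: including copying them to the default user profile
        control.exe intl.cpl,, /f:"C:\deploy\JP-Region.xml"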


  • VPN with VLANs? [closed]

    - by Craig
    As usual, I'm sure I'm in way over my head on this one. My networking skills are limited, so bear with me if you will. What I have are a few testing servers at my house as well as at a friend's house that I want to link together so they can see each other (VPN, right? I've done those before). We want to be able to see all the servers and work with them from either location, and all the servers also need to be able to see each other. But we don't want to see each other's PCs, printers, PS3s, etc. How do we pull that trick off? Multiple VLANs?... subnets?... what? If hardware matters, I have an old PC I was planning on loading pfSense onto, because my current el-cheapo router doesn't support VPN. The VPN linking the houses is about the only thing I'm sure of. Beyond that, I'm lost. I'm not a complete noob but, like I said, I'm not so sharp with the more complex networking. I do however read well... so use lots of descriptive words and feel free to link away to long dry articles if necessary. :-)
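
    One way to get the "servers yes, PCs no" behaviour without touching VLANs at all, sketched below with illustrative addressing: give the servers at each site their own subnet, and have the site-to-site OpenVPN tunnel route only those subnets. The home LANs are simply never advertised across the tunnel, so printers and PS3s stay invisible (pfSense can express the same idea through its OpenVPN and firewall screens):

        # server-side openvpn.conf fragment (sketch; all addresses illustrative)
        # site A servers: 10.10.1.0/24   site B servers: 10.10.2.0/24
        route 10.10.2.0 255.255.255.0         # reach site B's servers via the tunnel
        push "route 10.10.1.0 255.255.255.0"  # tell site B how to reach site A's servers
        # no routes for the 192.168.x.0/24 home LANs, so they never cross the VPN

    Firewall rules on each end can then restrict tunnel traffic to the two server subnets as a belt-and-braces measure.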


  • Outbound ports to allow through firewall

    - by dunxd
    This question was asked before, but in a rather general way; I'm asking more specifically based on my current requirements. We have a number of remote offices made up of a bunch of PCs and an ASA 5505, which is used as firewall and VPN termination point. In the offices we share the internet connection with one or more other organisations over whom we have very little control, aside from the config on the ASAs. For a bunch of reasons I'd like to lock down these ASA 5505s to only allow outbound traffic on ports used by applications we know we need. I'm putting together a standard config to roll out to all the ASAs, and if we need to open up ports for the other orgs we can do it on request; but I want to leave the most commonly required ports open, so we can get up and running without waiting on other folks' technical staff to get back to us. I plan to allow the following TCP ports to support commonly required services:
    - POP3 (110 and 995)
    - HTTP (80 and 443)
    - IMAP4 (143 and 993)
    - SMTP (25 and 465)
    The question really is: what other ports do I need to leave open to allow for "normal" working? I've seen UDP port 53 for DNS as one. Are there any others worth opening up? Just to note: I'll also be setting up monitoring systems to keep an eye on the ports we do allow, since any of the above could be misused, and we'll back all this up with signed agreements. But I'm aiming for a technical solution where I don't have to start out with the full requirements of everyone we share connections with. See also: outbound ports that are always open.
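
    A sketch of how that policy might look on an ASA (ACL and interface names illustrative). Beyond the TCP list, DNS (UDP 53) and NTP (UDP 123) are the usual must-haves for "normal" working:

        ! outbound lockdown applied to the inside interface (sketch)
        access-list OUTBOUND extended permit udp any any eq domain
        access-list OUTBOUND extended permit udp any any eq ntp
        access-list OUTBOUND extended permit tcp any any eq www
        access-list OUTBOUND extended permit tcp any any eq https
        access-list OUTBOUND extended permit tcp any any eq smtp
        access-list OUTBOUND extended permit tcp any any eq 465
        access-list OUTBOUND extended permit tcp any any eq pop3
        access-list OUTBOUND extended permit tcp any any eq 995
        access-list OUTBOUND extended permit tcp any any eq imap4
        access-list OUTBOUND extended permit tcp any any eq 993
        access-group OUTBOUND in interface inside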


  • Boot stuck at blinking cursor before GRUB - only works via BIOS boot menu

    - by delta1
    I have a new box running Debian Squeeze. GRUB is installed on /dev/sda, but when booting up I just get a blinking cursor before the GRUB menu ever appears. I can only boot to GRUB successfully when I choose boot options during POST and select that specific drive! I have made sure the correct drive is set to boot first in the BIOS. So GRUB works, but the system won't boot from that drive automatically. Any ideas on what could cause this? Drives sda/b/c are all 2TB (sda runs the system, with b/c as RAID device md0), with the following partitions:

        $ cat /proc/partitions
        major minor    #blocks  name
           8     0  1953514584  sda
           8     1         977  sda1
           8     2     9765625  sda2
           8     3     6445313  sda3
           8     4  1937302627  sda4
           8    32  1953514584  sdc
           8    16  1953514584  sdb
           9     0  1953513424  md0

    but "fdisk -l /dev/sda" gives:

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Device Boot  Start     End       Blocks  Id  System
        /dev/sda1        1  243202  1953514583+  ee  GPT

    Any insight into this strange behaviour would be greatly appreciated.
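
    Since the disk is GPT, a sketch of what to check: GRUB booted in BIOS mode from a GPT disk embeds its core image in a small bios_grub partition (the ~1MB sda1 above looks like exactly that), and some BIOSes additionally refuse to auto-boot a GPT disk unless the protective MBR entry carries the bootable flag, which matches the "works from the boot menu, not by default" symptom. Partition numbers below are from the question:

        # inspect the GPT layout; sda1 should carry the bios_grub flag
        parted /dev/sda print
        # set it if missing, then reinstall GRUB's boot code
        parted /dev/sda set 1 bios_grub on
        grub-install --recheck /dev/sda

    If that's already in place, flagging the protective MBR entry bootable (fdisk's 'a' command on partition 1) is the other common fix for this exact symptom.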


  • query keepalived

    - by tdimmig
    Note: I have trouble deciding what should go in Server Fault and what should go in Super User; if some kindly admin decides this is in the wrong place, please move it for me. Many thanks. I am implementing a basic HA system with keepalived. I only want to be notified of failover in the case of hardware failure; I do, however, have the servers switch roles periodically. I have a track_script running on the backup that varies its return value between 0 and 1 on an interval (once a week, once a month, whatever). Upon returning 0, the priority is raised above that of the master; upon returning 1, the priority is lowered again. This way they trade places on the configured interval. The question: what can I do to tell the difference between a switch caused by my script and a switch caused by one of the servers dying? I certainly want to be notified when there is an actual problem, but not every time the servers change places because of the script. I see that version 1.2.7 has SNMP support, and I may be able to use it to get some information that could tell me one way or another; but to be honest I've never used SNMP before, and I don't know how to get the information I want with it (my Google-fu failed me).
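
    One SNMP-free approach, sketched below with illustrative paths: keepalived runs a command on every state change via notify_master/notify_backup, and the scheduled-swap script can drop a flag file just before it flips its return value, so the notify script alerts only when a transition arrives without the flag:

        # --- keepalived.conf, inside the vrrp_instance block (sketch) ---
        #   notify_master "/usr/local/bin/ka-notify.sh MASTER"
        #   notify_backup "/usr/local/bin/ka-notify.sh BACKUP"

        # --- /usr/local/bin/ka-notify.sh ---
        #!/bin/sh
        # alert only when this transition was NOT scheduled by the swap script
        FLAG=/var/run/keepalived.scheduled-swap
        if [ -f "$FLAG" ]; then
            rm -f "$FLAG"      # expected role swap: stay quiet
        else
            echo "unscheduled failover to state $1 on $(hostname)" \
                | mail -s "keepalived failover" admin@example.com
        fi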


  • Using multiple USB webcams in Linux

    - by rachelderp
    Running more than one USB webcam in Debian/Linux results in the following error:

        libv4l2: error turning on stream: No space left on device
        VIDIOC_STREAMON: No space left on device

    What initially seemed to be a programming issue in OpenCV turned into a quest for a mysterious hardware/software problem after the same errors were produced by running cheese and xawtv. Apparently it's caused by the webcams requesting all the available bandwidth on the USB host controller. With that in mind, I decided to run wireshark and capinfos to see just how much bandwidth a single camera used:
    - 4 megabits per second at 320x240
    - 14 megabits per second at 640x480
    - 32 megabits per second at 1920x1080
    Interesting! That might explain why two cameras at 320x240 work but any higher resolution fails. It's as if my USB controller were only operating at USB 1 speeds, yet lsusb shows both webcams belonging to a device which supposedly supports 480 megabits per second. One proposed solution is to force the webcams to calculate their bandwidth usage instead of requesting their maximum, by running the following commands:

        sudo rmmod uvcvideo
        sudo modprobe uvcvideo quirks=128

    Unfortunately that made no difference, so I decided to try another solution. A post on Stack Overflow suggested telling my webcams to use a lower FPS or a compressed video format like MJPEG, but after running v4lctl list it doesn't appear that either of my webcams supports changing its video mode. And that's where I'm stuck. Why would two webcams operating well below the maximum speed of USB 2 produce this error? PS: it's not a disk space issue; df displays no change when the webcams are started. PPS: if it makes a difference, here's the output of lsusb.
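
    A sketch of how to look at the reservation rather than the observed traffic: uncompressed UVC cameras reserve their worst-case isochronous bandwidth per USB (micro)frame the moment the stream starts, and "No space left on device" refers to that reservation, not to what wireshark later measures. The debugfs devices file shows each host controller's periodic-bandwidth allocation, and plugging the second camera into a port served by a different physical controller gives it its own budget:

        # expose the USB debug tree (may already be mounted)
        sudo mount -t debugfs none /sys/kernel/debug 2>/dev/null
        # 'B:' lines show per-controller allocated periodic bandwidth
        grep -E '^(T:|B:)' /sys/kernel/debug/usb/devices
        # count the physical host controllers available
        lspci | grep -i usb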


  • What is the typical maximum number of database connections for Oracle running on Windows Server?

    - by Sake
    We maintain a database server that serves a large number of clients, each client typically running several client applications. The total number of connections to the database server (Oracle 9i) reaches 800 at peak load, and the Windows 2003 server is starting to run out of memory. We are now planning to move to 64-bit Windows in order to gain more usable memory. As a developer, I suggest moving to a multi-tier architecture with connection pooling, which I believe is the natural solution to this problem. However, in order to support my idea, I'd like information on the following:
    - What exactly is the typical number of connections allowed for an Oracle database?
    - What is the problem when the number of connections is too high? Too much memory consumption? Too many open sockets? Too much context switching between threads?
    To be a little more specific: how do Oracle Forms applications scale to thousands of users without facing this problem? Should Oracle RAC be applied in this case? I'm sure the answer depends on quite a number of factors, like the exact spec of the hardware being used; I'm expecting a rough estimate or some real-world experience.
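
    A sketch for putting numbers on the current setup (standard dynamic views, run on the server): each dedicated-server connection costs a server process/thread plus its PGA memory, and on 32-bit Windows every session is a thread inside a single oracle.exe address space capped at roughly 2-3GB regardless of installed RAM, which is why 800 sessions hurt. Comparing session count against PGA consumption makes the pooling argument concrete:

        # configured ceilings vs. actual usage, via sqlplus
        sqlplus -s "/ as sysdba" <<'EOF'
        SELECT name, value FROM v$parameter WHERE name IN ('processes', 'sessions');
        SELECT COUNT(*) AS current_sessions FROM v$session;
        SELECT ROUND(SUM(pga_used_mem)/1024/1024) AS pga_used_mb FROM v$process;
        EOF

    Oracle Forms deployments traditionally scale by exactly this kind of middle tier: the Forms server multiplexes many users onto far fewer database sessions, which is the same argument as connection pooling. RAC adds nodes for capacity and availability, but it does not reduce the per-connection cost.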


  • Varnish going sick

    - by junke1990
    I'm having trouble with Varnish: it works for a couple of views and then just goes sick... The weird thing is that it does work for about 20 or 30 requests. If I call Apache directly, everything is fine. I'm running Varnish 3.0.3-1 on Debian Squeeze with, for now, Apache on port 80 and Varnish on port 8080 on the same server. I'm using https://github.com/mattiasgeniar/varnish-3.0-configuration-templates as the base for my VCLs, modified to support Concrete5. Anyone have a clue on how I should debug this?

        backend default {
            .host = "127.0.0.1";
            .port = "80";
            .connect_timeout = 1.5s;
            .first_byte_timeout = 45s;
            .between_bytes_timeout = 30s;
            .probe = {
                .url = "/";
                .timeout = 1s;
                .interval = 10s;
                .window = 10;
                .threshold = 8;
            }
        }

    Log:

        0 CLI            - Rd ping
        0 CLI            - Wr 200 19 PONG 1353791312 1.0
        0 CLI            - Rd ping
        0 CLI            - Wr 200 19 PONG 1353791315 1.0
        0 Backend_health - default Still sick 4--X-R- 0 8 10 0.000689 0.000000 HTTP/1.1 301 Moved Permanently

    (The 301 is because I check for www.)
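
    The last log line looks like the whole story: the health probe fetches "/", gets the 301 www redirect, and a probe counts anything other than 200 as a failure, so once enough probe intervals have elapsed (window 10, threshold 8, every 10s) the backend is declared sick no matter how well real requests were going — which matches "works for 20-30 requests, then sick". A sketch of one fix using Varnish 3's probe syntax:

        .probe = {
            .url = "/";
            .expected_response = 301;   # accept the www redirect as healthy
            .timeout = 1s;
            .interval = 10s;
            .window = 10;
            .threshold = 8;
        }

    Alternatively, point .url at a path that answers 200 directly and leave .expected_response at its default.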


  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server, with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or is out of date, and every article keeps linking back to a Howto from rubyonrails.org that doesn't exist. The sense I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache and Mongrel/Passenger, using ARR and URL Rewrite. Is anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server on a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
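
    If you go the ARR + URL Rewrite route, here's a sketch of the site-level web.config rule, assuming a Mongrel instance (or Apache with Passenger) listening on port 3000; the port and rule name are illustrative, and ARR's proxy mode has to be enabled once at the server level:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="RailsReverseProxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:3000/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>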


  • Unified inbox shows twice on Thunderbird

    - by That Help Vampire Guy
    I'm using Thunderbird 24. If I show folders in Unified mode, my inbox folder shows up twice; if I choose the "All" folders mode, I see only one inbox. The issue started when I was using Ubuntu 12.04, but now I'm on Fedora 19 (I migrated the folders in /home). I do remember it not being duplicated at first; then it started, while still on Ubuntu. I noticed it when using the Conversations plugin, but I had previously used the plugin without this happening, and I have disabled the plugin and the issue persists. What I have tried: if I close Thunderbird and rename the .thunderbird folder in my /home to something else, it creates a new config profile, I have to set up everything again, and then it works as expected (see the screenshots: before resetting, Unified vs. All Folders; after resetting, Unified vs. All Folders). I'm trying to avoid resetting the profile and creating a fresh new one, because the server (MS Exchange) doesn't support IMAP labels, so I'd lose all the tags on my messages, and I have organized everything by tags instead of folders.


  • Games on Windows 8 in Boot Camp lag even on lowest graphics

    - by Jackson Gariety
    I've been playing Crysis 2 and Skyrim on my Retina MacBook Pro (10,1) for months now. The two games used to run super smoothly even on nearly maxed-out settings; this laptop has an NVIDIA GeForce GT 650M graphics card inside, and it runs great. But I recently replaced my Windows 8 Consumer Preview with the retail copy, and since then 3D games lag in this odd way, no matter what the graphics settings: every second, Skyrim and Crysis alternate between running smoothly and lagging. It's a cyclical lag that comes and goes like clockwork. I can turn the graphics down to 800x600 with no antialiasing and low texture quality, and it runs much smoother on the "up" part of the cycle, but every second it drops back into a lag spike. I've tried installing beta graphics drivers, reinstalling the operating system, reinstalling the Boot Camp support software, and freeing up space (I have about 20 GB free). I can't figure out what suddenly caused this, other than some obscure difference between the Consumer Preview and the retail version. What can I try? Is my video card failing? Are there some other drivers I can install? This isn't normal lag from maxing out the card; it happens even on the lowest settings.


  • Apache access and error logs written to the same file

    - by user196075
    I have an issue where the access and error logs are written to the same file! The configuration in virtualhosts.conf is the following:

        <VirtualHost *:80>
            ServerName ************
            ServerAdmin support@************8
            DocumentRoot /var/www/html/*********.com
            ErrorLog /var/log/httpd/********/********.com_error_log
            CustomLog /var/log/httpd/********/********.com_access_log combined
            <Directory /var/www/html/***********.com>
                Options -Indexes FollowSymLinks
                AllowOverride All
            </Directory>
        </VirtualHost>

    As you can see from the configuration, the access and error logs should be saved separately, but both are written to *.com_access_log. I have double-checked all permissions, group and owner, and can't find anything wrong. A previous error in the log file:

        [Thu Sep 19 14:15:02 2013] [error] [client 192.168.10.54] client denied by server configuration: /var/www/html/**********/show_has_offers.php

    I tried to generate the same error again, and I can find the hit in the access log only, as follows:

        192.168.10.75 - - [24/Oct/2013:08:11:14 +0000] "GET /show_has_offers.php HTTP/1.1" 404 1586 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Gecko/20100101 Firefox/22.0" 0 17332

    and nothing in the error log! Please advise...
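
    Two quick checks, sketched below: confirm which vhost actually answers the request (an access-log hit with a 404 but no error-log line can mean a different vhost, or the global config, handled it), and confirm which log files the running children really hold open:

        # which vhosts exist, and which is the default for *:80?
        apachectl -S
        # which log files does a running httpd child actually have open?
        lsof -p "$(pgrep httpd | head -1)" | grep -i log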


  • Partitioning of Ubuntu server which will use OpenVZ and encrypted partitions (unlocked through SSH login)

    - by DeletedAccount
    Hi, I'm about to install a server. Some context:
    - the HDD is 1 TB and I have 2 GB RAM
    - Ubuntu Server Lucid Lynx, amd64
    - I will use OpenVZ and have most functionality separated into containers; to support disk quotas, the container partition needs to be ext3 (not ext4)
    - each time the server reboots, I want to be forced to log in through SSH and mount the encrypted partitions by typing my password (if someone steals the server, no critical data should be readable)
    - I want as much as possible encrypted, yet I must still be able to log in through SSH, as the server has no monitor or keyboard
    I am not sure how big my partitions need to be, so being able to resize them later would be nice; I guess that implies LVM? But the manual partition mounting over SSH is even more important, if I have to pick one. How do you recommend I partition the HDD? And if daemons need the encrypted partitions, will they simply fail at boot, and can I just restart them after mounting the needed partitions?
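
    A sketch of one layout that satisfies both requirements (device names and sizes illustrative): a small unencrypted root that boots to a login prompt with sshd running, and everything else as a LUKS-encrypted LVM volume group that you unlock by hand over SSH after each reboot. Putting LVM on top of the LUKS device keeps later resizing possible:

        # one-time setup
        cryptsetup luksFormat /dev/sda3           # the big data partition
        cryptsetup luksOpen /dev/sda3 cryptdata
        pvcreate /dev/mapper/cryptdata
        vgcreate vg0 /dev/mapper/cryptdata
        lvcreate -L 200G -n vz vg0
        mkfs.ext3 /dev/vg0/vz                     # ext3 for OpenVZ disk quotas

        # after every reboot, over SSH:
        cryptsetup luksOpen /dev/sda3 cryptdata   # prompts for the passphrase
        vgchange -ay vg0
        mount /dev/vg0/vz /var/lib/vz
        /etc/init.d/vz start                      # start OpenVZ once its storage is up

    As for the daemons: disable the ones that need encrypted storage from starting at boot and start them by hand (or by a small script) after mounting, as the last line above does for OpenVZ.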


  • Is there a simple but good To Do Manager app for the Mac?

    - by Another Registered User
    Every morning I think about what I am going to do today. So I take a piece of paper and start to write things like:

        [ ] Call Mr. XYZ
        [ ] Answer support e-mails
        [ ] Reduce website header height by 20 px
        [ ] Create new navigation bar icons

    And every time I'm done with something, I draw a checkmark in the square. On paper. It would be fun to have something like this as an application, but I don't want a heavy project management tool or e-mail integration. It should be: download, install, use — without heavy configuration or a steep learning curve. Usually I don't schedule my to-dos; I just write down every day what I want to accomplish today. In my experience it doesn't make sense to plan what to do next week, because next week everything looks totally different. It would be cool if such a simple utility exists. At the moment I just use TextEdit and delete rows as they're done. With a nice interface, this would be much more fun.


  • ffmpeg - creating DNxHD MXF files with alphas

    - by Hugh
    I'm struggling with something in FFmpeg at the moment... I'm trying to make DNxHD 1080p/24, 36Mb/s MXF files from a sequence of PNG files. My current command line is:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mxf -pix_fmt rgb32 -b 36Mb /tmp/temp.mxf

    To which ffmpeg gives me the output:

        Input #0, image2, from '/tmp/temp.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mxf, to '/tmp/temp.mxf':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        [mxf @ 0x1005800]unsupported video frame rate
        Could not write header for output file #0 (incorrect codec parameters ?)

    There are a few things in here that concern me:
    - the output stream is insisting on being yuv422p, which doesn't support alpha
    - 24fps is an unsupported video frame rate? I've tried 23.976 too, and get the same thing
    I then tried the same thing, but writing to a QuickTime (still DNxHD, though) with:

        ffmpeg -y -f image2 -i /tmp/temp.%04d.png -s 1920x1080 -r 24 -vcodec dnxhd -f mov -pix_fmt rgb32 -b 36Mb /tmp/temp.mov

    This gives me the output:

        Input #0, image2, from '/tmp/1274263259.28098.%04d.png':
          Duration: 00:00:01.60, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: png, rgb32, 1920x1080, 25 tbr, 25 tbn, 25 tbc
        Output #0, mov, to '/tmp/1274263259.28098.mov':
            Stream #0.0: Video: dnxhd, yuv422p, 1920x1080, q=2-31, 36000 kb/s, 90k tbn, 24 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        frame=   39 fps=  9 q=1.0 Lsize=    7177kB time=1.62 bitrate=36180.8kbits/s
        video:7176kB audio:0kB global headers:0kB muxing overhead 0.013636%

    Which obviously works, to a certain extent, but still has the issue of being yuv422p, and therefore losing the alpha. If I'm going to QuickTime, then I can get what I need using Shake, but my main aim here is to be able to generate .mxf files. Any thoughts? Thanks
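
    Two observations, with a sketch. First, the input is being probed at image2's default 25 fps (the "25 tbr" on the input stream) while the output asks for 24; putting -r before -i declares the ingest rate instead. Second, DNxHD itself is an 8/10-bit 4:2:2 codec with no alpha channel, so no flag will carry rgb32 alpha through it; if the alpha must survive, an alpha-capable intermediate codec (qtrle in MOV, for instance) would be needed:

        # declare the rate as an *input* option so the PNGs are read at 24 fps
        ffmpeg -y -r 24 -f image2 -i /tmp/temp.%04d.png \
               -s 1920x1080 -vcodec dnxhd -b 36Mb -f mxf /tmp/temp.mxf

    If "[mxf] unsupported video frame rate" persists after that, the MXF muxer in that vintage of FFmpeg most likely only accepts 25/50 and 29.97/59.94 material, in which case a newer build (or 25 fps material) is the way out.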

