Search Results

Search found 30724 results on 1229 pages for 'backup solution'.

  • Multiple Devices connecting to VPN on CentOS server

    - by jfreak53
    I am looking for VPN software that works across multiple OSes and devices. I currently have 15 systems to connect to a VPN. I was using Hamachi from LogMeIn, but their lack of Android support really upsets me, and their limited support for Linux OSes is also a letdown. 90% of my systems are Ubuntu 11+; only 2 are Windows XP. But I also have a few people, maybe 3, who need to connect from Android devices. This is where Hamachi has let me down, and I want to move to my own VPN solution.

    The server would be a simple VPS running CentOS, so I need VPN software that allows those clients to connect to a Linux-based server. I wanted to go with OpenVPN, but I am under the impression that on every OS you have to install its client software to connect. Ubuntu supports VPNs out of the gate, but OpenVPN requires extra software to be installed, which I'd like to avoid if I can. Same with Windows, and same with Android. Plus Android mostly requires rooted devices for OpenVPN, at least from what I've read.

    I was looking at L2TP, but I'm not sure how easy it is to get Ubuntu systems connected with it, as I haven't found much on the subject, let alone Windows XP machines. I know Android connects to it out of the gate. I don't know much about L2TP, but from what I have read it's a pain to get running on CentOS. The last option is some sort of software for PPTP, but I've never read anything on it and don't know if all systems are compatible with it.

    What would be your solution for these devices and multiple OSes? OpenVPN seems to be where I'm heading; I just don't like that it always requires client software and rooted Android devices. Any solutions for this, and installation suggestions? Would a different server OS, like Ubuntu, make another type of VPN easier?
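
    For reference, if OpenVPN does end up being the choice, the server side on CentOS is a fairly small config. This is a minimal sketch, assuming certificates have already been generated with easy-rsa and that 10.8.0.0/24 is free for the tunnel; all paths are assumptions:

        # /etc/openvpn/server.conf -- minimal sketch
        port 1194
        proto udp
        dev tun
        ca   /etc/openvpn/keys/ca.crt
        cert /etc/openvpn/keys/server.crt
        key  /etc/openvpn/keys/server.key
        dh   /etc/openvpn/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0    # VPN subnet handed out to clients
        keepalive 10 120                 # ping every 10s, assume peer down after 120s
        persist-key
        persist-tun
        status /var/log/openvpn-status.log

    Note that stock (non-rooted) Android of that era had no built-in OpenVPN client, whereas L2TP/IPsec and PPTP clients are built in, which matches the trade-off described above.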

  • How to connect a USB GDI printer to Linux over a D-Link print server?

    - by jpe
    The setup is the following:

        +-------------+       +-----------------+       +----------+
        | HP LJ P1005 |--USB--| D-Link DPR-1020 |--LAN--| PC Linux |
        +-------------+       +-----------------+       +----------+
                                      |
                                      |       +------------+
                                      +-------| PC Windows |
                                              +------------+

    The HP LJ P1005 is one of those GDI printers that requires the printer driver to do most of the work for it, and is therefore a bit "special". The D-Link DPR-1020 is a print server with an Ethernet and a USB port that actually supports printing to challenged (read: GDI) printers using a utility called PS-Link. What the utility does is basically mirror a USB port over the network to the print server, so that the printer driver and the printer are both happy to talk to each other. The PCs are notebooks that come and go, i.e. they are not there all the time.

    Is there an equivalent of the D-Link PS-Link utility for Linux that could mirror a USB port over the network for a Linux host? And can the solution be used with the D-Link DPR-1020? If not, then I basically wasted the money buying the print server, because the goal was to share a small printer among a couple of users with diverse operating systems in an office. The print server specs say it supports Linux and the LJ P1005, but the catch-22 appears to be the solution used for GDI printers...

    It should be noted that it is possible to print from Linux to the LJ P1005 directly over USB. So far, sharing has involved reconnecting the USB cable to the appropriate computer to print. Now one of the desks is separated, so the cable does not reach. Searching the net did not yield anything useful. Please do not suggest solutions involving a Windows machine (virtual or not); my question is whether a solution involving only a Linux machine exists.
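
    The generic Linux mechanism for mirroring a USB port over the network is usb/ip, though as far as I know it only works between two Linux hosts and will not speak the DPR-1020's proprietary PS-Link protocol. So treat this as a sketch of the concept only, assuming the printer were plugged into a Linux box acting as the USB server (bus id and host name are placeholders, newer usbip syntax):

        # on the host that has the printer plugged in (USB "server")
        modprobe usbip-host
        usbipd -D                      # start the usb/ip daemon
        usbip list -l                  # find the printer's bus id, e.g. 1-1.2
        usbip bind -b 1-1.2            # export that port

        # on the Linux client that runs the HP driver
        modprobe vhci-hcd
        usbip attach -r printerhost -b 1-1.2   # printer now appears as local USB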

  • Software/hardware to build video streaming server?

    - by Sasha Yanovets
    I am looking for a video streaming server solution, something like an online TV server, with the ability to make live broadcasts on the internet. What software could you recommend for that? What kind of hardware should it run on, and does it need anything special? I am looking for a solution that can scale up to at least 1000 simultaneous users online at a good video resolution. I think a general answer on what direction to choose is good to have, but here are more details on my specific case:

    We are starting almost from scratch. We have some video content that we've produced, but it is not delivered over the internet yet. We are not tied to any particular vendor for now. We want to stream 24 hours a day: three 8-hour blocks, with the content changing every day. We also want the ability to make regular live broadcasts. I guess we will need several streaming quality options (low ~56 kb/s, mid ~273 kb/s). Some terms are just foreign to me (like "play-truncation rate"); if you could point out what parameters we should be aware of, that would be great. The uplink to the internet is yet to be determined. We plan to start with something modest and scale up along the way.

    If you already have some kind of media streaming server, just describe its configuration here (hardware, OS, software) and the peak number of concurrent users it serves. I think it could help people approaching this task.
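
    As a rough illustration of the encoding side only (not a recommendation for the whole stack), pushing a pre-produced file in real time to an RTMP ingest point at the mid bitrate mentioned above might look like this; the server URL and stream key are assumptions:

        # a minimal sketch: stream a file in real time to an RTMP server
        ffmpeg -re -i block1.mp4 \
               -c:v libx264 -b:v 273k -preset veryfast \
               -c:a aac -b:a 64k \
               -f flv rtmp://streaming.example.com/live/channel1

    A second ffmpeg run at ~56 kb/s would produce the low-quality variant; the real sizing question is the uplink, since 1000 viewers at 273 kb/s is roughly 273 Mb/s of egress before any CDN.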

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft’s Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines: one high-end and rather beefy “heavy lifter” to contain the majority of the VMs, and another “backup” (a repurposed desktop) to hold the essential “secondary” (or failover) AD-DC, SQL and FS VMs. The main reason I find failover clustering at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole: if one machine (physical or virtual) goes down, you might lose some pings and the connections get reset, but the network applications (a Microsoft RMS connection to the backend SQL, for instance) can still connect to a viable DB without having to change their settings at all.

    My first question is about SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a Failover Cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs. But do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still running (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes more than two physical machines, holding not only the actual SQL VMs but also the Quorum and Witness stores. I will not be able to muster more than two physical machines, so I will have to be satisfied with a VM cluster that does not exceed two VMs (one per physical machine).

    Another issue involves MSDTC, the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it for this reason) it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to cluster it under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but those instructions don’t align with 2012 R2 (at least, not in a way that lets me successfully cluster MSDTC). Also, some of the instructions I have found for a SQL Server Failover Cluster installation suggest that a third “network device” - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won’t be getting it. Most of my storage lives on the “heavy lifter” that was designed for all of the “primary” VMs; if that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, a SQL server and a file server, so it will handle the “secondary” failover versions of those VMs (clustered or not).

    My final question involves file servers. If I cluster file servers between two VMs (one on my “heavy lifter” and another on my “backup”), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn’t replicate data between the two - that is left to the services that serve up the data. I am unsure how to ensure that the data between two clustered file-server VMs is properly mirrored. Remember, I only have two devices to be used here: my primary machine and a backup secondary. There is no chance of me obtaining a SAN or any other type of network-attached storage; what exists on the machines must act as the storage. Thanks in advance for any suggestions.
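
    For what it's worth, classic database mirroring between two standalone instances needs no shared storage at all, which fits the two-box constraint above. A minimal T-SQL sketch (endpoint port, database name and FQDNs are assumptions, and the database must first be restored on the mirror WITH NORECOVERY):

        -- on each instance: create a mirroring endpoint (sketch)
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- on the mirror server, point at the principal
        ALTER DATABASE SalesDb SET PARTNER = 'TCP://sql1.example.local:5022';

        -- on the principal server, point at the mirror
        ALTER DATABASE SalesDb SET PARTNER = 'TCP://sql2.example.local:5022';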

  • Lots of Apache processes are running and CPU usage is always above 70%

    - by Barkat Ullah
    I am running a Plesk panel on a 1and1 server. I have 120 sites running, all using Pligg CMS, and each site has 600 visitors per day. The details of my server: HDD 1000GB, RAM 16GB, 6-core processor.

    I always see a lot of Apache processes running in my top view, so the server seems overloaded. If I could reduce the number of Apache processes, I think the server would be OK, but I don't know why so many are running. Please see the link below for a screenshot of my top view: http://dl.dropbox.com/u/26967109/%23Top-2.jpg

    Sometimes I saw "too many connections" errors in my Plesk control panel, so I added the line below in my [mysqld] section:

        set-variable=max_connections=416

    But that didn't solve it. I have also added MaxClients and ServerLimit of 416 in /etc/httpd/conf/httpd.conf, but still no solution. I have been researching for more than 7 days without finding one. In peak hours my sites take too long to load, but in off-peak hours they are OK. Please help me find the actual problem.
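
    One common starting point is to cap the prefork MPM so Apache cannot spawn more workers than RAM allows. This is a sketch only - the numbers are assumptions and need to be sized against the per-process memory visible in top, not copied as-is:

        # /etc/httpd/conf/httpd.conf -- prefork sizing sketch
        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         150
            MaxClients          150     # keep total resident memory under physical RAM
            MaxRequestsPerChild 4000    # recycle children to contain leaks
        </IfModule>

    The rule of thumb: MaxClients times the average Apache process size (from top's RES column) should stay below the RAM left over after MySQL and the OS.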

  • Exchange 2003: recover mailbox when migrated to 2007

    - by lyngsie
    We have migrated almost all of our mailboxes from Exchange 2003 to 2007, so both are still up and co-existing in our domain. One mailbox is missing some mails, and I tried to recover it from an Exchange 2003 backup using ExMerge as usual, but it throws an error:

        Error opening message store (MSEMS). Verify that the Microsoft Exchange Information Store service is running and that you have the correct permissions to log on. (0x8004011d)

    This is apparently expected when a mailbox has been migrated to 2007 and you try to recover a 2003 backup, as stated here: http://www.petri.co.il/restoring-exchange-2003-mailboxes-exchange-2007-exmerge.htm

    We then tried the solution given in the article (copying the value of "homeMDB" for the user and pasting it into "msExchOrigMDB" on the Recovery Storage Group object in ADSI Edit). The error unfortunately persists.

    So my question is: how can I extract a .pst from a mailbox in the Recovery Storage Group when both Exchange 2003 and 2007 are running? I assume this is why the solution in the article didn't work. If no solution is possible using the regular tools (ExMerge, eseutil, etc.), which third-party tool should we choose - preferably the cheapest, as we only need this one mailbox recovered.
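
    If the 2003 store can be restored into a Recovery Storage Group on the 2007 side instead, Exchange 2007's management shell can pull the data without ExMerge. A sketch, with all identities being placeholders (this merges into the live mailbox rather than producing a .pst directly):

        # Exchange 2007 Management Shell -- sketch, names are assumptions
        Restore-Mailbox -Identity "jdoe" `
            -RSGDatabase "Recovery Storage Group\Mailbox Database" `
            -RSGMailbox "John Doe" `
            -TargetFolder "Recovered"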

  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
    Recently I posted a question on SO, but so far I've got no answers. I wonder if I'm asking the wrong question. This is the problem: we need to design an application that offers a public HTTP web service, but at the same time it must consume some services through a VPN connection from another, existing company. There is no alternative but to use a VPN connection to access those services.

    We want to host our application on some cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the other company's VPN services from there. The solution I'm considering, but don't like, is to have a separate server that exposes the services from that VPN. But this requires setting up another server, which I'd prefer to avoid. If this is the solution, can I use an Amazon EC2 instance to connect to a VPN?

    Is what I'm thinking correct? I don't have experience using VPNs, tunnels or that kind of networking stuff. I would really appreciate it if you could propose an alternative solution, or just leave me a comment.
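
    An EC2 instance is an ordinary Linux box, so it can run a VPN client itself; in principle nothing about the VPN requirement kills the cloud option. A minimal sketch with an OpenVPN client, assuming the partner's VPN is OpenVPN-compatible and they supply the credentials (all values here are assumptions):

        # /etc/openvpn/partner.conf -- client sketch
        client
        dev tun
        proto udp
        remote vpn.partner-company.com 1194
        ca   partner-ca.crt
        cert our-app.crt
        key  our-app.key
        persist-key
        persist-tun

    If the partner uses IPsec instead, the same idea applies with a tool such as Openswan; the application then talks to the tunnel interface locally, with no extra relay server needed.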

  • Is there any way to distribute x264 encoding jobs across multiple computers (to increase the encoding speed)?

    - by Breakthrough
    Does anyone know of a current, active solution to encoding x264 videos across many computers (via the network) to increase encoding FPS? Brownie points for cross-platform and open source, but just so you all know, I usually use Windows.

    Programs that I have heard of, and why I do not believe they are suitable:

    - x264farm: Not actively developed. Good interface, but does not support two-pass encoding, and fails with newer x264 builds.
    - ELDER: Again, not actively developed, but my issue was that it didn't work with new x264 builds, and it was very difficult to configure (read: randomly stopped working).

    While I don't absolutely need a program which is being actively developed, I would like one that supports two-pass encoding, and works with new(er) x264 builds.

    Additional information: So far, I've offered (and awarded!) two separate bounties on this question since I first posted it over two years ago, and I still haven't found a solution to this problem. What I'm looking for basically is a simple program to allow me to encode x264 videos using the processing power of multiple computers connected over a LAN. Furthermore, it would be nice if it worked with new(er) x264 builds, and supported two-pass encoding. If at any time someone has an updated answer, or a new solution to this problem, please post it and it will be given some consideration.
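
    In the absence of a dedicated tool, the brute-force approach is to chunk the source losslessly, encode the chunks on different hosts, and splice the results. A rough shell sketch (host names are placeholders; note that naive chunking costs some ratecontrol efficiency at the seams and complicates true two-pass):

        # 1. split losslessly into ~60s chunks (cuts land on keyframes)
        ffmpeg -i source.mkv -c copy -f segment -segment_time 60 chunk%03d.mkv

        # 2. farm chunks out to LAN hosts over ssh (one chunk per host shown)
        scp chunk000.mkv hostA:/tmp/ && ssh hostA \
            'ffmpeg -i /tmp/chunk000.mkv -c:v libx264 -crf 20 /tmp/enc000.mkv' &

        # 3. pull results back, then splice without re-encoding
        printf "file 'enc%03d.mkv'\n" 0 1 2 > list.txt
        ffmpeg -f concat -i list.txt -c copy encoded.mkv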

  • How to change .htaccess file to work right in localhost?

    - by Manolo Salsas
    I have this snippet in my .htaccess file to prevent users from hotlinking the server's images:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^http://(www.)?itransformer.es/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://itransformer.es [R,L]

    Of course, it is not working on my localhost, but I don't know how to make it. My guess is that I should replace the domain name with some wildcard. Any idea?

    Update: I've finally found the answer thanks to @Chris's solution:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} ^https?://%{HTTP_HOST}/.*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    The /usuarios/ directory is there because I only want to deny direct access to files inside that directory.

    Update 2: For some reason, it stopped working again. I think I've found a better solution:

        RewriteCond %{REQUEST_FILENAME} .*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]

    I say better solution because what I want to deny is direct access to a file (image).

    Update 3: Well, after a while I discovered the above wasn't exactly what I wanted, so the following is definitive:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://itransformer.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    Just two doubts. If I change the above to:

        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://%{HTTP_HOST}.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]

    it doesn't work. I don't understand why, because %{HTTP_HOST} equals itransformer on my localhost, and it should work. The second doubt is why the default 404 page is shown and not my custom page (which is shown for all other 404 responses).
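
    A likely explanation for the first doubt: in mod_rewrite, server variables such as %{HTTP_HOST} are only expanded in the TestString (the left-hand side of RewriteCond), never inside the regex pattern on the right, so the pattern is matching the literal text "%{HTTP_HOST}". A common workaround is to put both values into the TestString and compare them with a backreference; a sketch along those lines:

        # sketch: 404 any image request whose referer host differs from the
        # requested host (an empty referer never matches, so it is blocked too)
        RewriteCond %{HTTP_REFERER}@%{HTTP_HOST} !^https?://([^/]+)/.*@\1$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]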

  • nginx redirect proxy

    - by andrew
    I have a web app running on an nginx server at local IP 192.168.0.30:80. I have this in my /etc/hosts:

        127.0.0.1 w.myapp.in

    If someone accesses my app using a "w" subdomain, it shows a WebDAV interface; otherwise it runs normally (for example, http://myapp.in goes into the app, and http://w.myapp.in goes into the WebDAV interface - this is done within the app, nginx has nothing to do with it).

    Because I don't have DNS or anything like that, users must access the app by IP. A problem appears if someone wants to access the WebDAV interface, because you cannot reach the app on a subdomain - unless you add a line to your local hosts file, which is not a solution.

    A possible solution: is it possible to set up nginx so that if someone calls http://192.168.0.30 (on port 80), it goes into the app normally, but if a user tries to access, say, http://192.168.0.30:81 (another defined port), it redirects internally to w.myapp.in, so that the app sees the subdomain?

    Given the app, can this be done? If yes, what should I put in the nginx config file? And if you guys can think of a better solution, I'm open to anything.
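
    This can be done by letting nginx listen on the second port and proxy back to the app while rewriting the Host header, so the app believes the request arrived on the subdomain. A minimal sketch, assuming the app itself is served on port 80 of the same box:

        # sketch: expose the WebDAV personality of the app on port 81
        server {
            listen 81;

            location / {
                proxy_pass http://127.0.0.1:80;
                proxy_set_header Host w.myapp.in;   # the app routes on this name
            }
        }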

  • Synchronize two directories on Linux PCs

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:

    - offline access: data must be available offline on each PC
    - preserve execution rights: some files are marked executable on a Linux partition, and this flag should be replicated
    - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images); delta transmissions are welcome
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it"
    - it must propagate deletions of files
    - modifications can happen on any of the 4 PCs and should be propagated when the other PCs are connected

    Other specs of my setup: sync is over a LAN; the total amount of data to be synced is around 180GB, in some ten thousand files; changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution. Conflicts either don't happen or are solved with "last one wins".

    I haven't found any good solution. I've been trying:

    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on
    - SparkleShare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely. They promise it will be fixed in upcoming releases, but at the moment it still doesn't fit my needs.
    - ownCloud: keeps a history of each file I change
    - Coda? (help! I couldn't set it up correctly!)
    - git-annex assistant: turns all your files into symlinks and marks the original files read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git-annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.

    What should I try next?
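
    Since unison is the only candidate that works so far, it may be worth tuning rather than replacing. A profile sketch along these lines (paths and host are assumptions) covers the "last one wins" requirement and may help with the hashing stalls:

        # ~/.unison/lan.prf -- sketch
        root = /data/shared
        root = ssh://pc2//data/shared
        prefer = newer        # "last one wins" conflict resolution
        batch = true          # don't ask questions
        times = true          # preserve modification times
        fastcheck = true      # trust mtimes instead of re-hashing everything

    unison preserves permission bits (so the executable flag survives) and uses rsync-style delta transfer for changed files, which matters for the 20GB images.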

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID5. The filesystem was operating "just fine" for several years when I began to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs I was given the message that the filesystem was dirty and needed e2fsck.

    Without thinking I just did:

        e2fsck -y /dev/mapper/candybox

    and it began spewing all kinds of "inode being removed" type messages (can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:

        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem was giving this error each time the machine booted:

        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another), and each attempt left this in my log:

        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy

    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:

        # ls -lh
        total 14T
        -rw-r--r-- 1 root root  14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root  271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used:

        mkfs.ext4 -L candy candybox.img -m 0 -S

    and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again; it had to recreate the root inode and gave a massive list of group-count corrections. I accepted the root inode creation and said no to everything else, which left a completely empty filesystem. I re-ran it and said yes to all questions, with the same result: a "clean" but empty filesystem. extundelete gives me 0 recoverable inodes found.

    And now I'm stuck again. I can't come up with any methods other than dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over if I can get any suggestions or ideas on a way to get more of my files back. I wish I could give more detailed logs of the commands that ran, but the output scrolled past except for what got logged to syslog, and my memory is not that detailed given the timeframe this occurred over. Any help is greatly appreciated!
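
    One sequence worth trying on a fresh copy of the image (never the only copy) is to locate the backup superblocks explicitly and point e2fsck at one of them, rather than letting it pick. A sketch; the block size and superblock offsets here are assumptions and must be confirmed against the filesystem's real geometry:

        # work on a throwaway copy of the rescued image
        cp --sparse=always candybox.img work.img

        # dry run: list where mke2fs would have placed backup superblocks
        # (-n writes nothing; parameters must match how the fs was created)
        mke2fs -n -b 4096 work.img

        # try e2fsck against one backup superblock at a time
        e2fsck -b 32768 -B 4096 work.img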

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:

    - our main file server is a Linux machine
    - on the LAN, users simply access the files using SMB
    - each user has an account on the file server and his/her own access rights
    - user accounts are simple passwd/group security accounts, not NIS/LDAP

    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files over the Internet while travelling. Ideally I'd like a seamless solution; something that lets the user access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course: users would be expected to log in, and the connection to the server should be encrypted.

    Does anyone have pointers to neat solutions? Any experiences?

    Edit: the client machines are Windows only.
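
    One route that satisfies both the mapped-drive and the explorer-like requirements on Windows-only clients is WebDAV over HTTPS on the existing server. A sketch (the Apache paths, hostname and auth file are assumptions, and the Windows WebClient service must be running on the clients):

        # Apache vhost fragment -- sketch
        <Location /files>
            DAV On
            SSLRequireSSL
            AuthType Basic
            AuthName "Company files"
            AuthUserFile /etc/apache2/dav.passwd
            Require valid-user
        </Location>

        rem on a Windows client, map it to a drive letter
        net use Z: https://files.example.com/files /user:alice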

  • Reverse proxy for mailserver (SMTP + HTTP for web client)

    - by ba
    I'm looking at doing some reverse proxy work for a mail server with a corresponding web client. Both run on the same machine; this is not a server with a high load. :)

    The solution I've discussed with friends is having the mail server/web client on our internal network, and putting a reverse proxy in the DMZ to service both the SMTP and web-client HTTP traffic to the mail server on the internal network. From what I understand this is the recommended secure setup?

    So far, for the SMTP proxy part, I've thought of using Postfix, which will receive mail, apply some Spamhaus and similar anti-spam measures, and, if it all checks out, pass the mail to the mail server on the inside. The mail server on the inside will send all outgoing mail to the proxy, which will then send it out to the Internet.

    For the web client I'm not sure which software I should run on the proxy machine. I've been thinking about Squid - but that's basically based on the fact that I know Squid is an HTTP proxy. The web client data will be sent out over SSL. Reading around here on Server Fault I've seen other people using Apache with mod_proxy + mod_security for similar situations.

    Am I thinking correctly with this solution? What software would you use, and with which modules? Thanks in advance for the help! :)
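
    For the Postfix leg, the DMZ box only needs to know that it relays one domain inward and accepts outbound relay from the inside. A minimal sketch, where the domain and internal addresses are placeholders:

        # /etc/postfix/main.cf on the DMZ relay -- sketch
        relay_domains  = example.com
        transport_maps = hash:/etc/postfix/transport
        mynetworks     = 127.0.0.0/8 192.168.10.0/24   # let the inside relay out

        # /etc/postfix/transport
        example.com    smtp:[192.168.10.5]

        # rebuild the lookup table after editing
        postmap /etc/postfix/transport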

  • Accessing a webpage folder with .htaccess in it via Apache WebDAV?

    - by pingo
    I have set up WebDAV access to enable an external user to upload the content of his web page to his folder on my server, which is served to the web by Apache. This way he can update his web page via WebDAV. Now the problem is that the user requires a .htaccess file, and of course .htaccess breaks WebDAV, probably because it overrides settings (new files cannot be uploaded via WebDAV anymore if the .htaccess specified below exists).

    I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to the problem. The idea was to specify an alias to the web page folder where WebDAV would be enabled, and then set AllowOverride to None so that the .htaccess would have no effect. Of course I then found out that the AllowOverride directive is not valid in <Location />.

    The .htaccess file looks like this:

        #opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web, but at the same time be able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that permits the .htaccess to stay, so that the user still has the power to set up access rules for his web page.
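
    Since AllowOverride is only valid in <Directory> context, one sketch that keeps the .htaccess working for normal web traffic is a separate listener dedicated to DAV with its own <Directory> section for the same path (the port is an assumption; paths and auth settings are taken from the config above):

        # sketch: dedicated DAV vhost that ignores the user's .htaccess
        Listen 8080
        <VirtualHost *:8080>
            DocumentRoot "d:/wamp/www/somewebsite/"
            <Directory "d:/wamp/www/somewebsite/">
                Dav On
                AllowOverride None          # .htaccess no longer breaks uploads
                AuthType Digest
                AuthName DAV-upload
                AuthUserFile "D:/wamp/passtore/user.passwd"
                AuthDigestProvider file
                Require valid-user
            </Directory>
        </VirtualHost>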

  • Make a drive from one machine appear as a physical disk in another machine.

    - by Roberto Sebestyen
    I want to take a physical disk (or part of a disk) in one machine (call it machine-A) and make it available on another machine (machine-B). But I don't want to map a network drive; I want it to appear on machine-B as a physical drive, even though it is not one.

    The reason I want to do this is that I want the ability to create shares on machine-B on that drive. Since I cannot do that on mapped drives, I need some utility that fools machine-B into thinking it is a physical drive and treating it as such. Both of these machines are Windows Server 2003.

    I have heard about NFS; it sounds like it could be the solution to my problem. But isn't that a Linux/Unix protocol? What tools can I use to make this happen? Are there any open-source solutions? I don't care what the solution is, as long as it achieves the end result - preferably open source, though. Thanks for reading, guys and gals!
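
    The usual name for this arrangement is iSCSI: machine-A exposes the disk as a target, and machine-B's initiator presents it as a local block device that Disk Management, and therefore shares, treat like real hardware. Microsoft ships a free iSCSI initiator for Server 2003; a command-line sketch on machine-B, where the portal IP and IQN are placeholders:

        rem sketch: attach an iSCSI target so it shows up as a local disk
        iscsicli QAddTargetPortal 192.168.0.10
        iscsicli ListTargets
        iscsicli QLoginTarget iqn.2010-01.local.machine-a:disk1

    The target side on machine-A is the harder part on Windows 2003, since Microsoft's target software wasn't freely available for it; open-source targets exist mainly on Linux, which may be the trade-off to weigh here.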

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine. (File size is arbitrary and can be adjusted as needed.) I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted, without being copied anywhere else - to minimize the already high HDD load. (The worker itself requires quite a bit of bandwidth.)

    Please advise a solution that is not based on Java. None of the existing replication solutions that I've seen can do the "free the HDD of the file once processed" part - but maybe I'm missing something... A preferable solution should work with plain files (from the POV of our business logic code) and not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to - except Java.)
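
    If nothing off the shelf fits, the plumbing can be quite small: watch the incoming directory, push each completed file to a worker, and drop the distributor's copy once the transfer is verified; workers then simply rm files when done. A shell sketch using inotify-tools and rsync (host names and paths are placeholders, and real use would need retry handling and logic to spread files across workers):

        #!/bin/sh
        # sketch: deliver each completed incoming file to a worker, then free it
        inotifywait -m -e close_write --format '%w%f' /incoming |
        while read f; do
            # --remove-source-files deletes the local copy after a verified send
            rsync --remove-source-files "$f" worker1:/work/inbox/ &&
                echo "delivered: $f"
        done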

  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they realised this, the nightly backup was old, so we were not able to just restore the backup on another server. The database is still running and being used constantly. However, DBCC CHECKDB fails, the SQL Server backup task fails, Copy Database fails, and the Export Data wizard fails. It seems all the data can still be read from the tables (i.e. using bcp etc.).

    Another observation I have made is that the transaction log is nearly double the size of the database. (Does that mean all the changes aren't being written to the MDF?)

    What would be the best plan of attack to get the database to a state where backups are working and the data is safe?

    1. Take the database offline and use the MDF/LDF to somehow create the database on another SQL Server?
    2. Export the data from the database using bcp. Create the database on another SQL Server (using the Generate Scripts function on the corrupt DB to create the schema on the new DB) and use bcp again to import the data.
    3. Some other option that is the right course of action in this situation?

    The IT manager says the data is safe because, if the server fails, the data can be restored from the MDF/LDF. I'm not so sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things, as supposedly the disk error is on a virtualized disk that can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and any incorrect assumptions on how SQL Server operates; I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit

    Results of DBCC CHECKDB:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 7928, Level 16, State 1, Line 1
        The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
        Msg 5030, Level 16, State 12, Line 1
        The database could not be exclusively locked to perform the operation.
        Msg 7926, Level 16, State 1, Line 1
        Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.
        Msg 823, Level 24, State 3, Line 1
        The operating system returned error 1(error not found) to SQL Server during a write at offset 0x00000674706000 in file 'G:\AX40_Dynamics_Live.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
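
    For the nightly bcp failsafe mentioned above, a sketch of the export and re-import (server and table names are placeholders; -n keeps SQL Server's native types, -T uses Windows authentication):

        rem sketch: dump one table in native format from the damaged database
        bcp AX40_Dynamics_Live.dbo.CustTable out CustTable.bcp -n -T -S PRODSQL01

        rem matching import into a rebuilt schema on another server
        bcp AX40_Rebuilt.dbo.CustTable in CustTable.bcp -n -T -S DEVSQL01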

  • Vim: auto-comment in new line

    - by padde
    Vim automatically inserts a comment when I start a new line from a commented-out line, because I have set formatoptions=tcroql. For example (cursor is *):

        // this is a comment*

    and after hitting <Enter> (insert mode) or o (normal mode) I am left with:

        // this is a comment
        // *

    This feature is very handy when writing long multi-line comments, but often I just want a single-line comment. Now if I want to end the comment series I have several options:

    - hit <Esc>S
    - hit <BS> three times

    Both of these take three keystrokes, which together with the <Enter> means four keystrokes for a new line - too many, I think. Ideally, I would like to just hit <Enter> a second time and be left with:

        // this is a comment
        *

    It is important that the solution also works with different indentation levels, i.e.:

        int main(void) {
            // this is a comment*
        }

    hit <Enter>:

        int main(void) {
            // this is a comment
            // *
        }

    hit <Enter>:

        int main(void) {
            // this is a comment
            *
        }

    I think I saw this feature in some text editor a few years ago, but I do not recall which one it was. Is anyone aware of a solution that will do this for me in Vim? Pointers in the right direction on how to roll my own solution are also very welcome.
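
    A starting point for rolling your own: an expression mapping that checks whether the current line is nothing but a comment leader and, if so, wipes it instead of continuing the comment. A sketch for //-style comments only; the pattern would need extending per filetype, and mapping <CR> can interact with completion plugins:

        " sketch: <CR> on an empty '//' comment line kills the comment instead
        " of continuing it; S re-enters insert mode at the correct indent
        inoremap <expr> <CR>
            \ getline('.') =~# '^\s*//\s*$' ? "\<Esc>S" : "\<CR>"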

  • DNAT from localhost (127.0.0.1)

    - by pts
    I'd like to set up a TCP DNAT from 127.0.0.1, port 4242, to 11.22.33.44, port 5353, on Linux 3.x (currently 3.2.52, but I can upgrade if needed). The simple DNAT rule setup doesn't work: telnet 127.0.0.1 4242 hangs for a minute at "Trying 127.0.0.1...", and then times out. Maybe the kernel is discarding the returning packets (e.g. SYN+ACK) because it considers them Martian.

    I don't need an explanation of why the simple solution doesn't work; I need a solution, even if it's complicated (e.g. it involves creating many rules). I could set up a usual DNAT from another local IP address outside the 127.0.0.0/8 network, but here I need 127.0.0.1 as the destination address. I know that I can set up a user-level port-forwarding process, but I need a solution that can be set up using iptables and doesn't need helper processes.

    I was googling for this for an hour. It has been asked multiple times, but I couldn't find any working solutions. There are also many questions about DNAT to 127.0.0.1, but I don't need that; I need the opposite.
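
    A recipe that reportedly works on kernels 3.6 and newer (so it would need the upgrade mentioned above) combines the route_localnet sysctl with an OUTPUT-chain DNAT plus masquerading. A sketch:

        # allow 127/8 to be routed off-host (kernel >= 3.6 only)
        sysctl -w net.ipv4.conf.all.route_localnet=1

        # rewrite the destination for locally generated traffic
        iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 4242 \
                 -j DNAT --to-destination 11.22.33.44:5353

        # give the packets a routable source address on the way out
        iptables -t nat -A POSTROUTING -p tcp -d 11.22.33.44 --dport 5353 \
                 -j MASQUERADE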

  • Why does my DL380 G3 crank the fan up so high (and how do I stop it)?

    - by smoofra
    I've got an HP DL380, and after I leave it on for a while it decides it needs to run the fans at a much higher speed than it did before. It's really loud and annoying. Is there any way to manually control the fan speed, or otherwise get it to stop doing that? Thanks.

    Edit: I guess I should have made it clear that this is a bit of a jury-rigged situation. I know the ideal solution is to put the thing in a server room with nice cool air. Unfortunately, that isn't happening. This is the sort of problem that calls for a jury-rigged solution like:

    - manually setting the fan speed and accepting the fact that it's going to run hot
    - shutting down a CPU (can I do that?)
    - spinning down the disks when they're not in use (is that possible?)

    Solution: install HP's system health monitoring daemon, hpasmd. I installed it to try to figure out what was going on, and just running it fixed the problem.
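
    For the record, the last two jury-rig ideas are possible on Linux independently of the fan question; a sketch, where the device and CPU numbers are examples:

        # spin a disk down after 10 minutes of inactivity (-S 120 = 120 * 5s)
        hdparm -S 120 /dev/sda

        # take a CPU core offline (cpu0 usually cannot be removed)
        echo 0 > /sys/devices/system/cpu/cpu1/online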

  • VM automatic provisioning advice

    - by jdgregson
    In my lab we have 24 workstations, each with five technician-maintained virtual machines set up in VMware Workstation. These create a lot of management overhead: we have to update them, as well as the host operating systems, every three months (at the start of each quarter), which adds up to 144 systems to update instead of just 24. Whenever we need to reimage the hosts, the VMs add another 130GB to each image, which is over 3TB of extra data to send over the network, and a lot more time to apply each image; then we still have to boot all 120 VMs and assign each a unique IP address and host name.

    We would like to get the VMs off the hosts and onto a server, but after looking around for a few days, I still don't know where to begin looking for a solution. There may be a better way to do this, but in my mind the ideal solution would be to replace the VMs on the host machines with five thin-client operating systems, each configured to connect to a server and be given, or connected to, a unique virtual machine. We can't have 120 VMs running on the server all the time, so the server would have to create a copy of the VM from a template whenever a student tries to boot one, and destroy the VM after the student is finished with it.

    If another client application has to be installed on the hosts, that would be fine; the only reason I'd like to keep them in VMware Workstation is that students already know to look there for the VMs when they need them. What, if any, virtualization software will allow this? Is there some other solution I'm not seeing?
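
    Even without a full VDI product, newer VMware Workstation releases expose cloning through the vmrun utility, which hints at what a provisioning script could look like. A sketch only: paths and names are placeholders, and the exact vmrun command set (notably linked-clone support) depends on the Workstation version, so check the vmrun documentation for your release:

        rem sketch: spin up a throwaway linked clone from a golden template
        vmrun -T ws clone "C:\VMs\Template\Template.vmx" ^
              "C:\VMs\Clones\student01.vmx" linked -cloneName=student01
        vmrun -T ws start "C:\VMs\Clones\student01.vmx"

        rem and destroy it after the session
        vmrun -T ws stop "C:\VMs\Clones\student01.vmx" hard
        vmrun -T ws deleteVM "C:\VMs\Clones\student01.vmx"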

  • Find the kth smallest element in a binary search tree in an optimal way

    - by Bragaadeesh
    Hi, I need to find the kth smallest element in a binary search tree without using any static/global variable. How do I achieve that efficiently? The solution that I have in mind is doing an inorder traversal of the entire tree, which is O(n) in the worst case. But deep down I feel that I am not using the BST property here. Is my assumed solution correct, or is there a better one available?
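
    A common improvement over the plain recursive traversal, still without globals: do the in-order walk iteratively with an explicit stack and stop as soon as the kth node pops, which costs O(h + k) for a tree of height h. A sketch in Python:

        # sketch: kth smallest in a BST via early-exit in-order traversal
        class Node:
            def __init__(self, val, left=None, right=None):
                self.val, self.left, self.right = val, left, right

        def kth_smallest(root, k):
            stack, node = [], root
            while stack or node:
                while node:                 # slide to the leftmost unvisited node
                    stack.append(node)
                    node = node.left
                node = stack.pop()          # next node in sorted order
                k -= 1
                if k == 0:
                    return node.val
                node = node.right
            return None                     # fewer than k nodes in the tree

    If the query is frequent, augmenting each node with the size of its left subtree brings lookups down to O(h), which is the "BST property" answer the question is circling.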

  • NPOI export from DataTable

    - by Iulian
    I have an ASP.NET website that generates Excel files with 7-8 sheets of data. The best solution so far seems to be NPOI: it can create Excel files without installing Excel on the server, and has a nice API similar to the Excel interop. However, I can't find a way to dump an entire DataTable into a sheet, similar to CopyFromRecordset. Any tips on how to do that, or a better solution than NPOI?
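
    As far as I know NPOI has no single CopyFromRecordset equivalent, but the loop is small enough to write by hand. A sketch (the method name and blanket ToString conversion are simplifications - real code would map cell types per column, and older NPOI builds use the HSSF classes directly instead of the ISheet/IRow interfaces):

        // sketch: dump a DataTable into an NPOI sheet, header row included
        using System.Data;
        using NPOI.SS.UserModel;

        static void WriteTable(DataTable table, ISheet sheet)
        {
            IRow header = sheet.CreateRow(0);
            for (int c = 0; c < table.Columns.Count; c++)
                header.CreateCell(c).SetCellValue(table.Columns[c].ColumnName);

            for (int r = 0; r < table.Rows.Count; r++)
            {
                IRow row = sheet.CreateRow(r + 1);
                for (int c = 0; c < table.Columns.Count; c++)
                    row.CreateCell(c).SetCellValue(table.Rows[r][c].ToString());
            }
        }

    Called once per sheet from a workbook built with new HSSFWorkbook() and written out via workbook.Write(stream), this covers the 7-8 sheet case in the question.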
