  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a way that is both secure and simple. The tricky part is that the staff and patron networks need to be kept separate; even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate: even if I set up enough VMs so they didn't share any servers, they would still need to use the same two printers at the very least.

    Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP, and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network; it has the same Windows server connected on a different port (again providing DNS and DHCP) and another, faster DSL modem. Separate connections are very useful, especially as we depend heavily on some cloud-hosted software.

    tl;dr: We have a public network, and a NATed staff network within it. My question is: is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)


  • Setting up Live @ EDU

    - by user73721
    [PROBLEM] We are trying to get our students' Exchange accounts ported over from an Exchange Server 2003 box to the Microsoft cloud service known as Live @ EDU. To do this we need to install two pieces of software:

    1: OLSync
    2: Microsoft Identity Lifecycle Manager

    The "Download the Galsync.msi here" link takes you to a page that requires an admin login for Live @ EDU. That part works. However, once logged in, it redirects to https://connect.microsoft.com/site185/Downloads/DownloadDetails.aspx?DownloadID=26407, which states:

        Page Not Found
        The content that you requested cannot be found or you do not have permission to view it.
        If you believe you have reached this page in error, click the Help link at the top of the page to report the issue and include this ID in your e-mail: afa16bf4-3df0-437c-893a-8005f978c96c

    [WHAT I NEED] I need to download that file. Does anyone know of an alternative location for that installation file? I also need to obtain Identity Lifecycle Manager (ILM) Server 2007, Feature Pack 1 (FP1). If anyone has any helpful information, that would be fantastic! Likewise, if anyone has completed a migration of accounts from an on-site Exchange 2003 server to the Microsoft Live @ EDU servers, any general guidance would be helpful. Thanks in advance.


  • Ubuntu 10.04 freezing and Ctrl + Alt + Backspace does nothing but music keeps playing

    - by Bryce Thomas
    I'm having intermittent problems where the screen will freeze in Ubuntu. I've tried using Ctrl + Alt + Backspace to restart the X server, but this does nothing. When the freeze occurs, there's a small square of black dashes around the mouse pointer - maybe 1 inch in size. These dashes look a lot like a 2D barcode. The rest of the screen looks normal, but I can't move the mouse, and none of the keyboard shortcuts do anything. However, music that I began playing before the freeze continues to play, which seems to indicate the machine hasn't stalled completely. I've noticed a similar freezing problem when I'm using Windows 7; that is, I see the same barcode-like dashes around the mouse pointer when it freezes up. So I'm guessing it's either a driver or a hardware problem - though I thought if it were a hardware problem, the whole computer might stop working (i.e. the music would stop playing)? The video card I am using is an Nvidia, I believe in the 7600 range. In Ubuntu I have the drivers for the card set to the latest available (proprietary), and ideally I'd like to keep using the proprietary drivers. Are there any known issues with the drivers for this model of graphics card, or has anyone experienced the same problem and knows how to fix it?


  • HTTP Error: 413 Request Entity Too Large

    - by Torben Gundtofte-Bruun
    What I have: I have an iPhone app that sends HTTP POST requests (XML format) to a web service written in PHP. This is on a hosted virtual private server, so I can edit httpd.conf and other files on the server, and restart Apache.

    The problem: The web service works perfectly as long as the request is not too large, but around 1 MB is the limit. After that, the server responds with:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>413 Request Entity Too Large</title>
        </head><body>
        <h1>Request Entity Too Large</h1>
        The requested resource<br />/<br />
        does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
        </body></html>

    The web service writes its own log file, and I can see that small messages are processed fine. Larger messages are not logged at all, so I guess that something in Apache rejects them before they even reach the web service?

    Things I've tried without success (I've restarted Apache after every change; these steps are incremental):

    1. Hosting provider's web-based configuration panel: disable mod_security
    2. httpd.conf: LimitXMLRequestBody 0 and LimitRequestBody 0
    3. httpd.conf: LimitXMLRequestBody 100000000 and LimitRequestBody 100000000
    4. httpd.conf: SecRequestBodyLimit 100000000

    At this stage, Apache's error.log contains the message:

        ModSecurity: Request body no files data length is larger than the configured limit (1048576)

    It looks like my step #4 didn't really take, which is consistent with step #1 but does not explain why mod_security appears to be active after all. What more can I try, to get the web service to receive large messages?
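    For what it's worth, the quoted limit (1048576) is the default of ModSecurity's SecRequestBodyNoFilesLimit directive, which is separate from SecRequestBodyLimit. A minimal sketch, assuming ModSecurity 2.x and that its configuration lives in its own include file rather than httpd.conf (the path below is an assumption; it varies by distribution):

        # e.g. /etc/httpd/conf.d/mod_security.conf -- location is an assumption
        SecRequestBodyLimit 104857600
        SecRequestBodyNoFilesLimit 104857600

    If the directives are set in httpd.conf but the hoster's panel re-includes its own ModSecurity config afterwards, the later file wins, which would explain why step #4 appeared to have no effect.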


  • Can any Postfix guru help me determine how emails are still being sent via my server from unauthorized sources?

    - by Dave
    Hi all, I'm getting a little concerned, as I run a small server hosting a number of websites and manage the email for a few dozen people. Just recently I've had a couple of notifications from SpamCop alerting me that spam has been sent from my server, and when I look over the logs from time to time I can indeed see many repeated attempts at mail being sent from my server. Most of the time it gets knocked back by the destination servers, but sometimes it gets through.

    Unfortunately I'm no Linux or Postfix expert; I can get by, but I had thought I had my machine locked down quite securely. I don't allow relaying, and when I check the online DNS/MX tools they tend to report my server as being OK, so I'm not sure where to take it now and am hoping someone might be able to throw me a few pointers.

    I get lots of entries like this in my MAIL.INFO log:

        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 66B88257C12F: from=<>, size=3116, nrcpt=1 (queue active)
        Jan 2 08:39:34 Debian-50-lenny-64-LAMP postfix/qmgr[15993]: 614C2257C1BC: from=<[email protected]>, size=2490, nrcpt=3 (queue active)

    and:

        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6471]: 0A316257C204: to=<[email protected]>, relay=none, delay=384387, delays=384384/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)
        Jan 7 16:09:37 Debian-50-lenny-64-LAMP postfix/error[6470]: 5848C257C20D: to=<[email protected]>, relay=none, delay=384373, delays=384370/3/0/0.01, dsn=4.0.0, status=deferred (delivery temporarily suspended: host mx.fakemx.net[46.4.35.23] refused to talk to me: 421 mx.fakemx.net Service Unavailable)

    Then there tend to be connection timeouts. So from what I see, even though I have relaying disabled, something is getting by and trying to send. If you can help, that would be greatly appreciated, and I can supply any further logging/config info. Thanks
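    A small triage sketch, assuming a stock Debian Postfix (the queue ID below is taken from your own log; postcat shows the full message, including headers that often reveal whether it was injected by a local script rather than received over SMTP):

        postqueue -p                 # list what is sitting in the queue
        postcat -q 0A316257C204      # dump one suspect message, if still queued
        # if mail is being injected by a PHP site rather than via SMTP, this
        # php.ini setting (PHP 5.3+) stamps the originating script into a header:
        #   mail.add_x_header = On

    The from=<> entries are bounces, so it is worth checking whether a web application on the box is generating the mail locally; locally injected mail bypasses SMTP relay restrictions entirely.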


  • Apache taking up a lot of CPU while running request-tracker4

    - by bhowmik
    I am trying out a request-tracker installation on an EC2 micro instance. The specs for the micro instance are as follows:

    1) Ubuntu 12.04 64-bit, 613 MB RAM, 8 GB hard drive
    2) Running request-tracker 4.0.4 from the repository, Perl 5.14.2, Apache2, MySQL 5
    3) request-tracker 4.0.4 running with mod_perl2 and the worker MPM
    4) Apache configured with the worker MPM; config snippet given below:

        Timeout 150
        KeepAlive On
        MaxKeepAliveRequests 60
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>

    Now when I start Apache2 it works fine for some time, and after a while the CPU load shoots up to 99% or more. Usually it is one or more Apache processes doing this. I've tried modifying the worker module configuration without any luck. The log files for both Apache2 and request-tracker4 are set to log debug messages and don't show anything to indicate what could be causing this. The system gets a maximum of 5 users at any given time, and usually (90% of the time) it is just 2. I've just installed it and we only have 20 tickets in the database. I don't think it's memory that's causing the issue, since the server isn't swapping or even close to it, and I hardly see the memory usage go up. I would appreciate any pointers on how to go about troubleshooting this. In case it helps, I've also tried a similar installation on a small instance (identical settings except RAM bumped up to 1.7 GB) and I still see the issue.
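    One way to see what the spinning processes are actually doing is Apache's mod_status; a minimal sketch, assuming Apache 2.2 as shipped on Ubuntu 12.04 and that mod_status is enabled (a2enmod status):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    Then `curl http://localhost/server-status` while the CPU is pegged shows which requests the busy threads are serving; pairing that with `top -H` (thread view) usually narrows it down to either a runaway RT request or something outside Apache entirely.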


  • Active Directory + IIS + SQL + ASP.NET

    - by Amira Elsayed Ismail
    I have sent the following question to the Stack Overflow website:

        I have installed Windows Server 2008 R2 on a virtual machine. Can I install Active Directory with a domain controller + IIS + SQL Server on the same machine? I want to make a web application that will authenticate users against Active Directory. The web application should be published on the server's IIS, and the users should access it remotely from their homes using the domain name of my machine. Someone told me that it's very wrong to have IIS and Active Directory on the same machine.

    I got the following answer:

        You can't use Active Directory over the internet. At least not without something like a VPN as a middle man. Their home computers will not be joined to the domain, so there is no pass-through authentication. Yes, it's a bad idea to put AD on the web server. Why is too complex to get into in an answer here. Suffice it to say that even if you did do this, it probably would not work the way you are thinking it should. It's not impossible to do this. For instance, many of the Microsoft "Small Business" products put IIS, AD, and SQL Server on the same server. But, you kind of have to know what you're doing to configure it securely.

    Then I added the following comment:

        Thanks for your reply. So what do you think is the best way to do this, as I haven't done anything like this before? Should I install Active Directory on one machine and IIS on another? And what about SQL Server - should I add it to the same server as Active Directory? I didn't mention that it will be a Microsoft Dynamics server that will access some information about work, and I have to read data from Axapta as well. Also, what is a VPN, and how can I use it to let users access my web application from anywhere?

    Sorry for my long questions, and thanks in advance. If anyone can help, I will be thankful.


  • Choosing the right e-mail client

    - by CFP
    Hi all, I'm currently using Outlook 2007 (under Windows 7), but I much prefer free software (open source being the best, of course), so I thought I'd ask for expert advice here. I thought it might be easier if I included a small "wanted list":

    - I receive about 15 to 30 e-mails every day, but I have large archives (10'000 emails) which I frequently need to access.
    - I usually open and close my mail program many times, so I'd like it to start pretty fast.
    - I cannot use an online mailbox, because I have too many email addresses (about 5: 1 for work, 1 for home, 1 semi-private, 1 for specific emails, and 1 for newsletters).

    By order of importance, the things I'd like my mail client to be able to do:

    - Efficiently categorize e-mails. Until now, I've mostly been using Outlook folders, because filtering by tags was not easy, but I'd rather have one large list of mails, neatly tagged so I can easily filter. I'd love being able to select mails by tags (e.g. in a click or two (could be a tab) show all mails tagged with "software").
    - Create "tagging rules", such as "if the mail was sent to this address, add this tag", or "if the body contains ..., add that tag".
    - Sync contacts with Gmail, handle tasks (syncing with Toodledo would be awesome), possibly provide a calendar.
    - Create e-mail templates, signatures...

    Other ideas: a timeline, scripting support, being able to import MS Outlook emails, a nice backup format... Thanks for sharing ideas and suggestions!


  • Maximum file descriptor limit in PHP reached and not changeable

    - by mlaug
    I have a server with the current PHP 5.3.x version installed. We are running a really simple and small server in PHP that uses sockets and connects to a lot of clients, so we need to raise the open-file limit. That has already been done on the server for the user that runs the service:

        # ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 29879
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 8192
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 29879
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    We also compiled PHP with --enable-fd-setsize=8192, yet we still get the following in our logs once in a while:

        [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date.

    Does anyone know how to configure the Unix server and PHP correctly to get this working? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1
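    A minimal sketch of a commonly suggested (if heavy-handed) fallback, assuming PHP is built from source: FD_SETSIZE is a C preprocessor macro, so forcing it through CFLAGS makes it reach every compilation unit, including ext/sockets, in case the configure switch does not propagate that far. The build commands are illustrative; keep your existing ./configure flags:

        export CFLAGS="-DFD_SETSIZE=8192"
        ./configure --enable-sockets --enable-fd-setsize=8192 ...
        make clean && make && sudo make install
        php -v    # confirm the running binary is the one just built

    It is also worth confirming which binary actually logs the warning (CLI vs. FPM vs. mod_php); a stale, distribution-packaged binary elsewhere on the PATH would keep reporting the old limit no matter what you rebuild.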


  • Is a UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like):

    - Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms, etc.)
    - Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help)
    - One external hard disk still claiming to function, but corrupting data
    - One hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned)

    (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth over a power cut. Is a home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.


  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:

    - Affordable: less than $100/yr or as a one-time cost.
    - Reliable: at least a smaller chance of failing than there is of fire or flood.
    - Easy for the initial backup and for additions, and at least semi-easy to recover.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above.

    - Hard drive: I read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice - very easy to take out once a month and add files to.
    - DVDs: way too much of a hassle for both backup and restore.
    - Tape: no idea on the affordability of this option.
    - Online: given that I already have at least 300 GB, ever-increasing megapixels mean ever-bigger files, and my ISP upload is about 2 Mb at best, this just doesn't sound like a good option for me, but I could be convinced.
    - Other: have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I read, or is it a feasible option? Maybe two, to reduce the likelihood of failure?


  • MySQL InnoDB inserts very slow

    - by 133794m3r
    The database's schema is as follows:

        CREATE TABLE `items` (
            `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(45) NOT NULL,
            `main_type` tinyint(4) NOT NULL,
            `rarity` tinyint(4) NOT NULL,
            `stack_size` smallint(6) NOT NULL,
            `sub_type` tinyint(4) NOT NULL,
            `cost` mediumint(8) unsigned NOT NULL,
            `ilvl` smallint(6) unsigned NOT NULL DEFAULT '0',
            `flavor_text` varchar(250) NOT NULL,
            `rlvl` tinyint(3) unsigned NOT NULL,
            `final` tinyint(4) NOT NULL DEFAULT '0',
            PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=ascii;

    Now, doing an insert on this table takes 0.22 seconds. I don't know why a single-row insert takes so long. Reads are really fast, at something like 0.005 seconds. Using the example InnoDB configuration from dev.mysql.com, inserts average ~0.002 to ~0.005 seconds. Why a single insert here takes more than 100x as long makes no sense to me.

    My computer is as follows. OS: Debian Sid x86-64; MySQL 5.1; RAM: 4 GB DDR2; CPU: 2.0 GHz dual core; HDD: 7200 RPM, 32 MB cache, 640 GB.

    Why an INSERT INTO items ...; takes almost 100x as long as a SELECT * FROM items; will never make sense to me. It's still a small table at only 70 rows, and it took that long even when it had 0 rows.
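    For what it's worth, ~0.2 s per insert is in the range of one physical disk flush per commit: with autocommit on, InnoDB fsyncs its log for every single-row INSERT. A sketch of the two standard knobs (the row values are made up for illustration, and the my.cnf change trades up to a second of durability for speed):

        -- batch the inserts so the whole batch shares one flush:
        START TRANSACTION;
        INSERT INTO items
            (name, main_type, rarity, stack_size, sub_type, cost, flavor_text, rlvl)
        VALUES
            ('Apple', 1, 0, 20, 3, 5, 'An apple.', 1),
            ('Sword', 2, 1, 1, 7, 150, 'A sword.', 10);
        COMMIT;

    and, in my.cnf:

        [mysqld]
        innodb_flush_log_at_trx_commit = 2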


  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server and I recently noticed some weird behavior (not sure if it was happening before). If I have ufw enabled (with default deny all incoming, allow all outgoing, allow all HTTP, allow all on a random port I use for SSH), then when I perform some actions in an SSH session, the console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian

    Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it's related (it would happen with ufw disabled too). As requested, my ufw configuration is below. I don't know if it's relevant, but the server has two IPs: the SSH domain is configured on one, and HTTP is served (via Apache2) on the other.

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To            Action      From
        --            ------      ----
        19922/tcp     ALLOW IN    Anywhere
        9418/tcp      ALLOW IN    Anywhere
        80/tcp        ALLOW IN    Anywhere
        443/tcp       ALLOW IN    Anywhere
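    A small diagnostic sketch: since the hangs coincide with bursts of output (a full vim screen, cat of a larger file), it is worth checking whether ufw is dropping packets mid-session. With logging already on, dropped packets carry a [UFW BLOCK] prefix:

        sudo ufw logging medium              # more detail than the current "low"
        grep 'UFW BLOCK' /var/log/ufw.log | grep 19922
        # or, if rsyslog is not splitting ufw messages into their own file:
        dmesg | grep 'UFW'

    If blocks on the SSH port show up exactly when a session freezes, connection-tracking state is being lost; if nothing is logged, the firewall is probably innocent and MTU/offloading on the dual-IP interface becomes the next suspect.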


  • How can I change the default location/action of 'Open Outlook Data File' in Outlook 2010?

    - by Chadddada
    I have recently deployed a Remote Desktop Host server that functions as a remote Microsoft Office 2010 workspace for users. As part of locking down this server, I have installed all programs on the D: drive and, through the use of Group Policy, hidden all the drives on the server from standard users. In addition to hiding these drives, I am not allowing users to save anything locally (on the server) or open Libraries. However, one of the functions of the server is to provide the Outlook client. Often users will have their .PST file stored on a network location and want to open it in Outlook.

    Can I change the default action or location that File > Open > Open Outlook Data File looks in or tries to pull the file from? The default location seems to be under Users / Libraries. When clicking 'Open' you get a warning:

        This operation has been cancelled due to restrictions in effect on this computer.

    Clicking OK drops the user into a small menu that shows attached network drives under Computer. Can I instead have the 'Open' click drop the users into a defined network drive, or just open Computer and allow them to select a share? I don't want them to see the error message.

    A solution that looks to have been used for Office 2000/03 is:

        Key: HKEY_CURRENT_USER\Software\Microsoft\Office\<version>\Outlook
        Value name: ForceOSTPath
        Value type: REG_EXPAND_SZ
        Value: path to your storage folder

    I am not sure if there is a better way to do this now, or if this even works with Office 2010.
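    A hedged sketch of how that would translate to Office 2010, whose registry hive is 14.0. Note that ForceOSTPath governs .OST files; the commonly cited counterpart for .PST data files is ForcePSTPath (the UNC path below is made up):

        reg add "HKCU\Software\Microsoft\Office\14.0\Outlook" /v ForcePSTPath /t REG_EXPAND_SZ /d "\\fileserver\outlook$" /f

    This changes the default location Outlook offers for data files rather than relaxing the drive restrictions themselves, so it may sidestep the error dialog rather than remove the underlying policy conflict.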


  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).

    I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications, but many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?

    Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations, and I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 kB is extremely small for a deployment on server-class hardware.

    The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

        http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
        http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
        http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
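    One gap in snippets like these is persistence: sysfs writes are lost at reboot. On RHEL 6 there is also a profile-based alternative to hand-tuning that can serve as a baseline to compare against (a pointer, not a prescription; assumes the tuned package is in your repos):

        yum install tuned
        tuned-adm profile enterprise-storage    # sets elevator, readahead, etc.
        tuned-adm active                        # confirm what was applied

    For hand-tuned values, an /etc/rc.local entry replaying the echo/blockdev lines above is the usual low-tech way to make them survive a reboot.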


  • Can't Connect w/ SQL Management Studio After Domain Change

    - by Sam
    Our old Small Business Server 2003 machine (acting as our domain controller) was on the fritz, so we replaced it with a new Windows Server 2008 box and set it up as our new domain controller. In hindsight it may have been a mistake, but we set up the new server as a replacement and tried to keep as much the same as possible, including the domain name. The problem was that even though the domain name was the same, the client computers somehow still realized it was not the exact same domain. We had to unjoin and rejoin the domain and port over everyone's documents and settings.

    This morning, when I attempted to connect to my local SQL Server instance, it said that my login failed. When I try to use SQL Server Management Studio, it throws the error "Package 'Microsoft SQL Management Studio Package' failed to load" on startup, then exits without giving me a chance to change the login. I am using Mixed Authentication and have an administrative account as a backup. Ideas? If there is a more appropriate stack, please let me know where to put it.
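    A sketch of the likely repair path, assuming the backup administrative account is a SQL (non-Windows) login: the rebuilt domain has new SIDs, so the old DOMAIN\user logins stored inside SQL Server are orphaned even though the names look identical. Re-create them from the new domain (all names below are hypothetical):

        sqlcmd -S localhost -U BackupAdmin -P <password>
        -- then, inside the session:
        CREATE LOGIN [NEWDOMAIN\sam] FROM WINDOWS;
        EXEC sp_addsrvrolemember 'NEWDOMAIN\sam', 'sysadmin';
        GO

    The "Package failed to load" error may be a separate, profile-related SSMS issue; since documents and settings were ported between domains, clearing or recreating the SSMS user settings may be needed before the login problem is even visible.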


  • How can I display additional boot and shutdown information on the Windows 7 welcome screen?

    - by Daniel Saner
    There is a small tweak, I believe a registry key, that makes Windows 7 (and most likely Vista, too) display additional information on the Welcome and Shutting down screens. I have activated this tweak on one of my systems; unfortunately I forgot how I did it, and I can't seem to find the website that originally gave me the information.

    Usually, the Windows 7 welcome screen will just display "Welcome" when logging in. With the tweak activated, my Welcome screen gives status information such as "Loading user settings" or "Preparing desktop". When shutting down, the default screen simply says "Shutting down"; with the tweak activated, it gives additional status information such as "Stopping Windows services". This appears the same way that Windows gives information when updates are installed or configured during the startup or shutdown procedure, and I find it quite helpful for getting a feel for which task takes how long during that process.

    The only setting I was able to find is the Boot log checkbox on the Boot tab of the msconfig application. However, this results in Windows displaying console logs of the drivers it is loading, etc., instead of the animated Windows logo. This is NOT the setting I am looking for. The "additional boot information" setting that I have activated on this system still displays the regular animated Windows logo, and only replaces the strings displayed on the blue Welcome and Shutdown screens. Could someone direct me to the registry key (or whatever setting) that is used to get this behaviour?

    Edit: Here are a few pictures of the enhanced Welcome and Shutdown screens, taken with my mobile phone - they're in German though: the login screens show "Waiting for User Profile Service" and "Preparing desktop", and the logout screen shows "Stopping Windows services".
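    For reference, a commonly cited tweak matching this description is the "verbose status" policy value; hedged here, since I can't verify it is the exact one that was used (back up the registry before changing it):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v VerboseStatus /t REG_DWORD /d 1 /f

    The same switch is exposed in the Group Policy editor as "Display highly detailed status messages" under Computer Configuration > Administrative Templates > System.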


  • Have it fixed or buy a new one?

    - by Workshop Alex
    My dual-monitor system has just become a single-monitor system again, as the older monitor decided it would be nice to just turn black. It's a Samsung LCD monitor, over three years old. I'm not sure if the warranty is still valid, but I wonder which option would be more economical:

    1) Have the monitor fixed for a small amount.
    2) Buy a new monitor for a slightly bigger amount.

    When monitors were still expensive, I wouldn't have doubted this and would just have had my monitor repaired. But prices are so low nowadays (and repairs are expensive) that I wonder if it's worth the trouble. Of course, I'm in no hurry, since I still have the other monitor; it's just that I liked the dual-monitor setup.

    Solved! I just ordered a new monitor: a Samsung SyncMaster T260HD 25.5". It's way more expensive than a repair would have been, but I noticed that this one has a built-in TV tuner plus speakers, so it's worth the additional value it provides.


  • What is the replacement for the floppy?

    - by alexanderpas
    While CD (and to a lesser extent DVD) disks have reached the price point of the floppy, they have one significant downside: they are WORM (Write-Once Read-Many) media, allowing each disk to be used only a single time, and you need to be explicit in writing the data to the actual media (you need to burn it). While CD-RW solves the "use only once" problem, it is still EWORM (Erasable Write-Once Read-Many) media, which means you still need to be explicit in writing the data (you still need to burn it) and also very explicit in erasing it (a simple delete is not possible). Okay, we can use a CD-RW in packet-writing mode, but the downside is that this mode is not very universal, and it is not the native mode of the media.

    Now, while USB sticks and SD cards may not have the problems of the CD, they have a whole other kind of problem: their PRICE! USB sticks and SD cards are generally 10 to 100 times as expensive as diskettes per piece. SD cards have an added problem, because they need a reader to operate; while that is a very standard thing, it is not default equipment on the computer like the CD drive or USB port (or, historically, the diskette drive). You wouldn't give out a USB stick or SD card with a 100 kB text file, not caring whether you get it back or not.

    So, to recap:

    - CDs & DVDs are basically WORM media.
    - SD cards and USB sticks are relatively expensive.
    - SD cards also need special readers.
    - Diskettes have a very low data rate.
    - Diskettes have a very low storage capacity.

    Now, is there a medium out there that solves all these problems, or is there a way to get (very) small USB sticks or SD cards for a very low price (as they're the closest thing to the diskette)?


  • Why does hiberfil.sys come back from the dead on Windows 7?

    - by Corey White
    I have Windows 7 running on a small (40 GB) partition, with 4 GB RAM. This means that the hiberfil.sys file created by Hibernate takes up a significant portion of the available disk space, and I would like to remove it.

    I am aware that I can disable Hibernate and remove hiberfil.sys by entering powercfg -h off in an elevated command prompt. This works: the file is immediately removed, and after doing so, the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key is (correctly) set to 0. However, the next time I reboot the PC, hiberfil.sys returns from the dead, Hibernate is re-enabled, and that registry key has returned to 1.

    I'm pretty much at my wits' end with this. Almost everything I can find online about removing the hiberfil.sys file simply suggests using powercfg to turn off hibernation, and that appears to work for just about everyone - but it just keeps coming back for me! (Like a vampire, sucking up my disk space.) I did find one other thread from someone who seems to have had the same issue, but none of the suggestions there worked for the original poster (or for me). Still, I have tried everything listed there, including:

    - Disabling hybrid sleep
    - Disabling Hibernate through the command prompt, through the Power Options GUI, and through both (in both orders)
    - Manually changing the HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Power\HibernateEnabled key
    - Pretty much everything else I can think of!

    I do want to reiterate that I have no problem removing the file - that works great. It just comes back after every reboot. I'm about ready to throw in the towel and just run a script on login to disable Hibernate each time, even though that seems like a crazily hacky "solution"... but I was hoping someone here could suggest something else first. Thanks!
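    If you do end up falling back to the workaround you describe, a boot-time scheduled task running as SYSTEM is a little less hacky than a login script, since it fires before anyone logs in (the task name below is made up):

        schtasks /Create /TN "DisableHibernate" /TR "powercfg -h off" /SC ONSTART /RU SYSTEM /F

    That said, a key being flipped back on at every boot smells like Group Policy or an OEM power utility re-applying a power plan, so checking a `gpresult /h report.html` dump for power-management policies may find the real culprit.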


  • APC (PHP accelerator): in what situations should I use it?

    - by matthewsteiner
    So I've just got a small VPS, and I've installed APC, which sped up normal pages by 20% - 30%. I was reading about memcached and came to the conclusion that I can use APC for the same thing (caching objects from database results) if I'm not distributing over other servers. Since I only have the one server, APC will be just as beneficial for caching things in memory.

    I'm still in development mode, and I'm sure it's hard to tell what would be best for production. The thing is, my database queries seem pretty fast (between 0.0008 and 0.02 seconds), and none of my pages are heavily database-intensive. Would it be beneficial for me to cache results in memory? If the database is running well right now, is it going to have a hard time later? Also, does connecting to the database at all cost speed (even if I cache most of my queries, every page has to have a little database interaction for session data)?

    So, basically: with limited RAM and one machine, will using APC rather than just leaving the database uncached be much faster? Ideas?
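    For context, a minimal sketch of what "caching objects from database results" looks like with APC's user cache (the key, TTL, credentials, and query are made up for illustration; apc_fetch's second argument distinguishes a miss from a cached false):

        <?php
        $key = 'recent_posts';
        $posts = apc_fetch($key, $hit);
        if (!$hit) {
            // cache miss: hit the database, then keep the result for 60 s
            $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
            $posts = $pdo->query('SELECT id, title FROM posts ORDER BY id DESC LIMIT 10')
                         ->fetchAll(PDO::FETCH_ASSOC);
            apc_store($key, $posts, 60);
        }

    Given sub-millisecond queries, the win per page is small; this pattern pays off mainly for queries that are slow or run many times per request.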


  • Uploading a file > 1 MB in the Django admin gives a 400 Bad Request response

    - by ayaz
    I have a small Django (1.2.x) project deployed on Apache (2.x) via mod_wsgi (2.x). In the admin, if I upload a file < 1 MB, I can get it through; however, for a file of, say, 1.2 MB, I get a 400 response from the server with only "Error 400" in the body. I am wondering why this is happening. As far as I can see, there is no LimitRequestBody set in the Apache configuration. I have tried uploading with several browsers, including Firefox, Chrome, and Safari.

    In the Apache log file, there is apparently no entry for the requests that gave the 400 error response, which is strange. I should point out the scenario in which this is happening: the project is deployed on two identical Apache servers (completely identical setup) that sit behind a load balancer. On my development setup, of course, the problem does not surface. Any help with this will be very much appreciated.
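    A quick sketch for ruling layers out (paths are common defaults and may differ on these hosts). Since neither Apache's logs nor the app see the failing POSTs, the 400 is plausibly produced in front of Apache; notably, if the load balancer is nginx, its default client_max_body_size is 1m, which matches the observed cutoff:

        # on the web servers:
        grep -Ri 'limitrequestbody\|secrequestbodylimit' /etc/apache2/ /etc/httpd/ 2>/dev/null
        # on the balancer, if it is nginx:
        grep -Ri 'client_max_body_size' /etc/nginx/ 2>/dev/null

    Raising the limit on the balancer (e.g. client_max_body_size 20m; in the relevant server block) would be the corresponding fix, assuming that is where the request dies.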


  • Using a Layer 2 switch as a core switch

    - by imtech
    I have a small user base of about 20 people on at a time, spiking up to about 80 people during peak times. Most people (80+%) are connected over our Aruba managed wireless system. We have a Windows domain. We have three 24-port switches, all connecting back to a central 48-port switch where additional access ports, the firewall, servers, and the wireless controller all centrally connect. It's a flat network with dumb switches.

    I'm in the process of upgrading our infrastructure. Cisco pricing for switches is pretty high for us, so I've been looking at HP ProCurves, which seem to be within our budget range. I want to eventually make use of 802.1X, SNMP, QoS for possible VoIP upgrades, VLANs to separate a guest VLAN from authenticated users, and other more advanced features. PoE would be nice, but that's probably too expensive for us.

    I was thinking of having our core switch be a ProCurve 2610 and the rest of our switches that centrally connect to it be ProCurve 2510s. A true, full-blown Layer 3 switch is way out of our price range, but a 2610 seems good enough for us. The 2610 does static routing, which ought to be sufficient, but I'm in unfamiliar territory, so I'm looking for any gotchas. Also, should all the switches be 2610s, or just the core switch? Do I even need the 2610, or can I just go with all 2510s? I'm new to VLANs as well, so I'm not sure exactly what I need, but I would like an affordable infrastructure that won't need replacing 2-3 years down the line because I chose a product that was lacking. A sketch of the VLAN piece follows below.
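    For a sense of scale, the guest/user separation described here is only a few lines of ProCurve CLI; a sketch from memory (VLAN IDs and port ranges are made up - verify against the 2610 manual):

        configure
        vlan 10
           name "STAFF"
           untagged 1-20
           exit
        vlan 20
           name "GUEST"
           untagged 21-24
           exit

    With static routing enabled, the 2610 can route between the two VLANs itself, while the 2510s only need the VLANs tagged on their uplinks - one reason a routing-capable core with Layer 2 edge switches is a common budget design.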


  • Server 2012, Jumbo Frames - should I expect problems?

    - by TomTom
    OK, this might sound stupid, but is there any downside in practice to just enabling jumbo frames? From what I understand:

    - Any switch or Ethernet adapter that sees a jumbo frame it cannot handle will just drop it.
    - TCP is not a problem, as the maximum segment size is negotiated during connection setup.
    - UDP is a theoretical problem, as a server may just send a LARGE UDP packet that gets dropped on the way. Practically, though, as UDP is packet-based, I do not really think any software would send a UDP packet larger than 1500 bytes net without application-level configuration changes - at least this is how I do my own programming, as it is quite hard to get a decent MTU size without testing yourself, so in code you fall back to packets of at most 1500 bytes.

    The network in question is a standard small business network. We upgraded from an unmanaged 24-port switch to a 52-port switch with four 10G ports (Netgear - quite cheap) and will move a file server to 10G, also for iSCSI serving. All my equipment can handle a minimum of 9000 bytes at the Ethernet level, and due to local firewalls I really want larger packets (less firewall processing), but the network is also NAT'ed to the internet. On top of that, different machines quite often move around (download) large files (multi-gigabyte range) for processing.

    The question is: can I expect problems if I just enable jumbo frames? Again, this is not total ignorance - I just don't see programs sending UDP packets of more than 1500 bytes (if that is a practical problem, please tell me), and for TCP the MSS is negotiated anyway. If there is a problem, I can move to a dedicated VLAN, but that has its own share of problems, as basically most workstations would then have to be on both VLANs.
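    One quick sanity check before and after enabling it is a non-fragmenting ping at jumbo size between two hosts (Windows syntax; 8972 = 9000 bytes minus the 20-byte IP and 8-byte ICMP headers; the target address is made up):

        ping -f -l 8972 192.168.1.10

    If that succeeds host-to-host but ordinary traffic stalls, the usual culprit is a device in the path silently dropping oversized frames without sending the ICMP "fragmentation needed" reply that path-MTU discovery depends on - which is also one reason to leave the NAT gateway's WAN leg at 1500.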


  • Is it a good idea to run Redmine using Webrick through Nginx?

    - by Rohit
    The task here is to get Redmine set up for a small (<20 person) team, with possibly a few business clients accessing the setup as well. I am familiar with setting up PHP for Apache and, recently, Nginx, but I am not familiar with Ruby, Ruby on Rails, etc. I prefer to use the OS's (Ubuntu Linux LTS) package manager to install the different components, as it takes care of dependencies and updates.

    I have set up Nginx with PHP-FPM successfully, and am struggling with Redmine. As suggested here, I got Redmine running on port 3000 with the following upstart job:

        # /etc/init/redmine.conf
        # Redmine
        description "Redmine"

        start on runlevel [2345]
        stop on runlevel [!2345]

        expect daemon
        exec ruby /usr/share/redmine/script/server webrick -e production -b 0.0.0.0 -d

    And using the Nginx config on this page, I used Nginx to proxy requests to Webrick:

        server {
            listen 80;
            server_name myredmine.example.com;

            location / {
                proxy_pass http://127.0.0.1:3000;
            }
        }

    This works well locally. I wanted some opinions before trying it out on the live box (a 256 MB VPS). Further, should I use something like monit to monitor Webrick for failure?
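    On the monit question, a minimal sketch of what that would look like (the "matching" process test needs monit 5.x; the service name and match string are assumptions based on the upstart job above):

        check process redmine matching "script/server webrick"
            start program = "/sbin/start redmine"
            stop program  = "/sbin/stop redmine"
            if failed port 3000 protocol http then restart

    Webrick is single-threaded and generally considered a development server, so for ~20 users behind Nginx it may hold up, but it is worth keeping an eye on response times - and on the 256 MB VPS's memory once Rails is loaded.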

