Search Results

Search found 12834 results on 514 pages for 'small wolf'.


  • Legal IT documents

    - by TylerShads
    I have been wondering about this for the past week because my big boss told me to start keeping track of all the things I have fixed, how to fix them, etc., which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? More specifically, I am talking in terms of a EULA, ToC, etc. (correct me please if I'm using the wrong terms), or more precisely a policy, so to speak, for the users. I can't say I'm a legal expert, otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem ever did arise, what should I have written up or have on hand? EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know something must be done to comply with HIPAA requirements, I believe. I do like what anthonysomerset said about the "if I get hit by a bus" scenario, and I want to apply it not only to the documentation I am currently writing but also to edge cases such as an employee stealing info from the server, theft, etc. As for our staff, it's relatively small: a single HR person, no legal department aside from the 2 owners' lawyers, and me as the only IT person on staff, plus a guy who is no more than a Mac superuser.


  • Block SMTP sessions whose sender domain doesn't itself accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
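
    One avenue worth noting: Postfix ships a sender address verification feature that probes the declared sender's domain before the message is accepted, which is close to the late check described above (it verifies deliverability of the whole sender address, a stricter test than merely accepting a connection). A minimal sketch, with illustrative rather than tuned values:

      postconf -e 'smtpd_sender_restrictions = permit_mynetworks, reject_unverified_sender'
      postconf -e 'unverified_sender_reject_code = 550'   # default is 450 (temporary reject)
      postconf -e 'address_verify_map = btree:$data_directory/verify_cache'   # cache probe results
      postfix reload

    The probe runs while the incoming client waits and its result is cached, so it behaves like the kind of post-winnowing check described above; whether the extra probe traffic is acceptable is a judgment call.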


  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical PXE imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking OpenSSL. According to the manual this should be possible: If available, the OpenSSL library from http://www.openssl.org is used to support public key cryptography. The library must be built and installed prior to building NTP. The procedures for doing that are included in the OpenSSL documentation. The library is found during the normal NTP configure phase and the interface routines compiled automatically. Only the libcrypto.a library file and openssl header files are needed. If the library is not available or disabled, this step is not required. I already tried out ./configure --without-openssl but this didn't help. This is my ldd output:
      ldd ntpd/ntpd
        linux-gate.so.1 =>  (0xb7706000)
        libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
        libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
        librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
        libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
        /lib/ld-linux.so.2 (0xb7707000)
        libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
        libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
        libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)
    The system I am compiling on is 32-bit Debian Lenny using openssl 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without OpenSSL?
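
    A hedged rebuild-and-verify loop for experimenting with configure flags (the --without-crypto spelling is an assumption; check ./configure --help for the exact option your ntp version recognizes):

      make distclean || true          # clear cached configure results that may silently re-enable OpenSSL
      ./configure --without-crypto    # flag name assumed; confirm via ./configure --help | grep -i crypto
      make
      ldd ntpd/ntpd | grep -i crypto && echo "still linked against libcrypto" || echo "no libcrypto"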


  • w2k3 AD DC Demotion fails with "no other AD DC for that domain can be contacted"

    - by Kstro21
    I have a small office with a single W2K3 SP2 DC (bad idea, but it is real). I want to do a clean install of that PC, so I got another machine, installed W2K3 SP2, added it to the domain, ran dcpromo and set it to be a GC. Up to that point everything was OK. Then I tried to dcpromo the primary DC out, but it fails with: "The box indicating that this domain controller is the last controller for the domain mydomain.com is unchecked. However, no other Active Directory domain controllers for that domain can be contacted. Do you wish to proceed anyway? If you click Yes, any Active Directory changes that have been made on this domain controller will be lost." So I started moving all the roles to the new server as described here, and when the roles were all in place I tried again, but got the same result. I also tried moving DNS to the new server, but it made no difference. I shut down the old server and tried to log into a workstation, but it failed saying the domain was not available, and I couldn't join a new workstation to the domain either, so I had to power the old server back on. So, given that I have successfully moved all the roles and DNS to the new server:
      Why does dcpromo give that message on the old server?
      Why is the domain unavailable when I shut down the old server?
      If I click Yes when dcpromo shows that warning on the old server, will I lose all users, computers, OUs, etc.?
      Am I missing some step to make this work?
    Hope you can help me, thanks.
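
    When the old DC still claims no other DC can be contacted after a role move, a few standard W2K3 checks (netdom, repadmin and dcdiag come from the Support Tools; run them on both DCs) usually show whether the roles, replication and DNS really moved. A sketch of the sequence, not a guaranteed fix:

      rem All five FSMO roles should now list the new DC:
      netdom query fsmo
      rem Replication between the two DCs should complete without errors:
      repadmin /replsummary
      rem Both DCs should point at the working DNS server and have their SRV records registered:
      dcdiag /test:dns /v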


  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html I have a simple problem with it: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it back filter by filter. (I don't actually do it by hand, I have a nice program that does it for me... but still...) The problem is that I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily delete only the rules that need to be deleted (reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes only one hashing rule. My tc filter show:
      filter parent 1: protocol ip pref 1 u32
      filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
      filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
        match 0a0a0a0a/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
        match 0a0a0a0c/ffffffff at 16
      filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
      filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
        match 00000000/00000000 at 16
        hash mask 000000ff at 16
    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match, i.e. IP address 10.10.10.10). Removal of some small subgroup would also be good; for example, I could still recreate a bucket (bkt a) pretty fast. My attempts: I tried to number all the filters using prio, but with no luck -- that just creates something unusable (but deletable) below, while the bucketed filters remain there after it gets deleted. Any ideas? edit - I'm adding a simplified tl;dr description of the problem: I created a hash filter on some interface just like in http://lartc.org/howto/lartc.adv-filter.hashing.html and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
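
    For reference, u32 filters can be addressed individually by their handle (the fh value printed above), so a sketch of a single-rule delete, with eth0 as a placeholder device name, would be:

      tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32

    This targets only the node in bucket a of hash table 2 (the 10.10.10.10 match in the listing above); the divisor-256 table and the link filter stay in place. Worth verifying on a test interface first, since older iproute2 builds have been reported to be picky about the handle syntax.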


  • Connection to Google, Yahoo, Bing, Ask, etc. compromised via all devices on my home network - How?

    - by jt0dd
    I'm a very computer savvy guy (although not very networking savvy), and I may still be wrong about this, but I think my home network may be compromised somehow. I'd like to know if it's possible for someone to have hijacked my network's connection to Google.com and other popular websites. Update: The issue seems to affect all popular websites. I can connect to small (non-popular) websites without issue, but Facebook, Google, Yahoo, and Bing cannot be accessed by any device on my home network. On all devices using my home network, when I attempt to connect to google.com I'm shown a page claiming to be http://www.google.com with the message "WARNING! Internet Explorer is currently out of date. Please update to continue." I wouldn't be surprised by this at all if it were just the laptop. It's the fact that this is happening on all devices on my network that confuses me. Here's the screenshot from my iPhone, for reference. Can my home network be compromised? Is that even possible? How can something like this happen across all platforms on all devices in the same way? I wouldn't imagine every device / platform on the network would get the same virus. Should I assume that my network's security is totally compromised? Update: All mobile devices and laptops on my home network are experiencing the same alert when attempting to connect to google.com.
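
    Identical bogus pages on every device usually implicate the one thing they share, the router (typically its DNS settings or upstream DNS). A quick, hedged check from any one machine is to compare what the router's DNS resolves against a known-good public resolver:

      nslookup www.google.com              # uses the DNS server handed out by the router
      nslookup www.google.com 8.8.8.8      # same query against Google's public resolver

    If the first command returns addresses the second one doesn't (or returns a single unfamiliar IP for every popular site), the router's DNS configuration is the place to look; if both agree, the interception is happening elsewhere.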


  • VLAN Through Switch Doesn't Work

    - by vcsjones
    I have the following scenario: I have a Cisco Aironet 1040 access point. I have it configured with two SSIDs, each going to a different VLAN:
      SSID internal : VLAN 90
      SSID guest    : VLAN 70
    On the router side, I have a Cisco RV220W (with the radios now turned off) and have set up VLANs with matching VLAN IDs:
      VLAN 90 : 192.168.90.0/24
      VLAN 70 : 192.168.70.0/24
    As far as DHCP is concerned, each VLAN has a "DHCP Server" in the router's configuration. So with the access point connected directly to the router, everything works great. I connect to the internal network and get a 192.168.90.x address, and the guest network gets a 192.168.70.x address. Next I introduced a Cisco SG200-50 PoE switch between the router and the access point. The port is configured as a trunk port, so the VLAN tags should go right through the switch back to the router. However, when something is connected to the access point, nothing works. It isn't able to get an IP address, and manually assigning one doesn't seem to let any traffic route. Given that the access point works correctly when connected to the router directly, I believe the switch is misconfigured. What am I missing here? What can I use to better diagnose what the problem might be? It's small business equipment, so CLI access is not available. Below are screenshots of the switch's config. The access point is connected to GE2.
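
    The SG200 is GUI-only, but it can help to keep in mind what the trunk configuration would look like in IOS terms: both the AP-facing port (GE2) and the port facing the RV220W need VLANs 70 and 90 tagged, and the VLANs must exist on the switch itself. Roughly (a mental model, not commands the SG200 accepts):

      vlan 70
      vlan 90
      interface GigabitEthernet2        ! AP-facing port -- and the same on the uplink toward the RV220W
       switchport mode trunk
       switchport trunk allowed vlan add 70,90

    A common miss is configuring only the AP's port as a trunk and leaving the port toward the router as an untagged access port, which silently drops the tagged frames.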


  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services, and each service hosts an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages, and potentially notify some subset of the clients of state changes. Sure, it sounds like a botnet but it isn't. Consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1KB in size. How would you coordinate which services on which host should receive and relay state-change messages to connected clients? There's an infinite number of ways to solve this problem efficiently:
      AMQP (RabbitMQ, ZeroMQ, etc.)
      Spread Toolkit
      N^2 connections between all services (bad)
      Heck, even run IRC!
      ...
    I'm looking for a solution that:
      perhaps exploits the fact that there's only a small closed cluster
      is easy to admin
      scales well
      is "dumb" (no weird edge cases)
    What are your experiences? What do you recommend? Thanks!
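
    As a sketch of the simplest routing scheme hinted at above (each service owns its clients and listens on a channel named after itself, and anyone who wants to reach one of those clients publishes to that channel), here is what it looks like with plain Redis pub/sub; the channel names and JSON payload are made up for illustration:

      # inside service 3 on host1 (subscriber side):
      redis-cli SUBSCRIBE svc.host1.3
      # from any other service that needs to push a state change to a client owned by that service:
      redis-cli PUBLISH svc.host1.3 '{"client_id": 42, "event": "state-changed"}'

    The same shape works over RabbitMQ or ZeroMQ; the important part is the ownership convention (one well-known channel per service) rather than the transport.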


  • Outlook 2010 on WinXP runs once then refuses to run again until reboot

    - by msorens
    Since I installed Outlook 2010 on a new machine (WinXP Pro SP3) a couple months back I have had an issue that is quite annoying: if I close Outlook then attempt to restart it I get a small pop-up saying only "Cannot start Microsoft Outlook". I found one workaround, but not a terribly practical one: reboot. If I reboot then launch Outlook, it opens fine. Here is what I know:
      Since I can run Outlook just fine after a reboot, I do not see that a system restore, an OS reinstall, or the like would help.
      I tried "outlook.exe /resetnavpane" and "outlook.exe /safe" but those give the same error.
      There are no entries in the event log.
      There is no instance of Outlook appearing in the process list once I close the program, so it does not seem to be an alias for "Outlook is already running".
      As far as I have found, my situation is unique among reports of similar incidents: I have uncovered no other reports saying Outlook would run fine on the first launch or that a reboot would again allow it to run.
    Suggestions?


  • How can I make non-anti-aliased text look good in Firefox on Mac OS X?

    - by cosmic.osmo
    After being a Windows user for the last 10 years, I got a MacBook Pro, which I'm working on configuring to my liking. I find small-size anti-aliased text to be blurry and hard to read, so I typically disable it. I've found the settings in the General Control Panel, and used TinkerTool to increase the anti-alias threshold size to 18pt. Mac OS X and other applications appear to respect these settings. A problem appears when I use Firefox. By default, it's configured to ignore the Mac OS anti-alias settings. This is changed by going to about:config, and setting gfx.use_text_smoothing_setting = true (default is false). However, even with this setting, it appears Firefox is still rendering the fonts under the assumption that they will be anti-aliased, which results in very odd and uneven spacing, as you can see in this example (pay attention to the placement of the "s" in "Disable"):
      [screenshot: with anti-aliasing]
      [screenshot: without anti-aliasing]
    How can I configure Firefox to both not use anti-aliasing and to use correct font spacing? I'm using Mac OS X Lion and Firefox 5.
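
    One avenue sometimes suggested (unverified here, so treat both the preference name and whether Firefox honors it per application as assumptions) is to set the classic Mac OS X anti-aliasing threshold in Firefox's own defaults domain, mirroring what TinkerTool does globally:

      defaults write org.mozilla.firefox AppleAntiAliasingThreshold -int 18   # assumed key; may be ignored on Lion

    If Firefox's text backend ignores that, the spacing problem is likely in how it measures glyphs rather than in the smoothing setting itself, and there may be no clean fix short of leaving anti-aliasing on for the browser.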


  • EC2 instances keep becoming inaccessible via SSH; can I use an Elastic Load Balancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as my instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a browser, but whenever I try to connect via SSH I get a "Permission denied (publickey)" error message in my terminal. I am 100% sure I am doing nothing wrong, because I can create a new instance of the exact same AMI (it's a personal custom AMI), change absolutely nothing, including using the same .pem key, and then am able to SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd. I am looking into load balancing, but it seems the only "health" checks offered are for HTTP or TCP, so I'm not sure if I can make it monitor for SSH connectivity. This is important for development, as I may make 1-2 new pushes of an application a day and use SSH to do this. I have a designer who needs the app always accessible, as he works with the front-end files to test output against the live application. Anyway, any advice / info is appreciated.
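
    For what it's worth, a plain TCP health check pointed at port 22 effectively is an SSH-reachability check (it only proves the daemon accepts connections, not that the key works). A hedged sketch with the AWS CLI against a classic ELB, with the load balancer name as a placeholder:

      aws elb configure-health-check --load-balancer-name dev-ssh-watch \
          --health-check Target=TCP:22,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

    An ELB is a heavyweight way to get a probe, though; a cron job on any other host running nc -z <ip> 22 (or a CloudWatch alarm on the instance status checks) gives the same signal without the cost.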


  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128 MB of memory (UPDATE: now I'm considering 360 MB too):
      nginx and redis
      three instances of tornado apps (one is for a mobile site; a limited app, not my primary audience); has around 8 collections; a social webapp for my community
      mongodb
    Everything besides mongodb seems to have a small footprint. Memory-mapping-wise, I don't know how mongodb will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have.. hmm.. maybe ~50 15rps. I did my homework doing a lot of frontend optimizations and YSlow says grade A 91 (ruleset V2) :-) Anyone willing to share experiences? E.g. how big the data set was when mongo hit the ceiling, performance when mongo does a lot of disk IO, etc. Thanks. UPDATE: this is my pet project. I'll get back to you when I next have spare time to run httperf in a VBox with the exact spec. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff.
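
    One practical way to answer the memory question on the box itself, once it exists, is to start mongod with its small-footprint options and watch the numbers it reports (the flags below existed on the 1.x/2.x mongod of that era; treat them as assumptions for whatever version Ubuntu 9.04 provides):

      mongod --smallfiles --noprealloc --fork --logpath /var/log/mongod.log
      free -m                                            # overall memory pressure on the VPS
      mongo --eval 'printjson(db.serverStatus().mem)'    # resident / virtual / mapped sizes in MB

    Also worth remembering: a 32-bit mongod caps total data plus index size at roughly 2 GB because of its memory-mapped storage, which on a pet project may matter long before the 128 MB of RAM does.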


  • Printing from Firefox on different printers and setting the page details beforehand

    - by user1162541
    I've got an odd problem and I have not been able to fix it. I have a computer which is connected to two printers. One is a receipt printer (Epson TM-U220), and the other is an impact printer (Epson LX-300+). From Firefox, I need to print on both printers at different moments: first I print on the receipt printer, then on the impact printer, and so on. However, whenever I first print on the receipt printer and then go back to the impact printer, the printout is only the width of the receipt printer's page. That is, the page does not come out properly; just the left part of the page is used for printing and the right part is completely empty, as if I were still printing on the small receipt paper. And there is no way I can tell Firefox that I am printing on the larger printer. The second print on the impact printer comes out fine: Firefox now knows it is printing on the impact printer, and the output uses the full page width. But every first print on the impact printer uses the wrong paper size. How can I fix this? If I go to Page Preview, I cannot set the printer until I actually print the page. If I go to Print Preview > Configure Page, I cannot set the printer I will be using. I can only do so under Print Preview > Print (there is a dropdown box to set the printer), but at that point I can only set the printer and then click Print or Cancel. If I click Print, the computer remembers the setting but that page still comes out wrong, and if I click Cancel it simply does not remember the printer I just set.


  • Are there compact external USB audio interfaces which are better than on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice recognition software and dictates a lot of text using a headset. Now he has a new laptop, which only has a combined mic/headphones jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi, he said that it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small things the size of a flash memory stick, with only one mic and one headphone output, period. Now I am not sure whether these mini interfaces would produce better sound than onboard audio. They all seem to come not from established audio interface manufacturers, but from electronics accessory makers like Speedlink, or just no-name brands. Is there a compact audio interface with good A/D quality? (It is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like Cinch (RCA) in-/outputs, etc.) And if there isn't, will the no-name sound-card sticks offer any advantage over a simple adapter for the onboard sound?


  • "A disk read error occurred" after choosing to boot into Windows XP from GRUB

    - by kellogs
    "A disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB. [root@localhost linux]# fdisk -lu Disk /dev/sda: 160.0 GB, 160041885696 bytes 255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x48424841 Device Boot Start End Blocks Id System /dev/sda1 63 204214271 102107104+ 7 HPFS/NTFS Partition 1 does not end on cylinder boundary. /dev/sda2 204214272 255606783 25696256 af HFS / HFS+ Partition 2 does not end on cylinder boundary. /dev/sda3 255606784 276488191 10440704 c W95 FAT32 (LBA) Partition 3 does not end on cylinder boundary. /dev/sda4 276490179 312576704 18043263 5 Extended /dev/sda5 * 276490240 286709759 5109760 83 Linux /dev/sda6 286712118 310488254 11888068+ b W95 FAT32 /dev/sda7 310488318 312576704 1044193+ 82 Linux swap / Solaris Here, sda is a 160GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall how exactly have I used testdisk but it once said: Disk /dev/sda - 160 GB / 149 GiB - CHS 19458 255 63 The harddisk (160 GB / 149 GiB) seems too small! (< 169 GB / 157 GiB) Check the harddisk size: HD jumper settings, BIOS detection... So far I have tried to "fixboot" and "chkdsk" from a recovery console on the affected windows partition (/dev/sda1), the plug off power cord for 15 seconds trick, reinstalling GRUB, repairing the MFT and boot sector of the affected partition via testdisk, what next please? Thank you!


  • VMware "boot screen" - add info like contact, phone number, etc.?

    - by TheCleaner
    I've tried searching Google and VMware's KB but maybe I'm not typing the right search criteria...only finding ways to fix problems with booting or screen issues. On the default boot screen of a host it looks similar to this picture from GIS: I'm curious if it is possible somehow to make it look like this instead (adding custom details): I know for the most part the info is "useless" since it is administered remotely, etc. But when I deploy standalone hosts to branch offices, it'd be nice for them to see this type of info on the boot screen. I may also include the VMs hosted on it (again on standalone hosts). Normal monitoring, etc. will be done remotely. This is strictly for odd times when the branch contact may say "the electricians are saying they need to turn off this circuit but I have no idea who to call in IT to tell them this box needs to be shut down" or similar. Anyone who has dealt with small branch offices can tell you that if it isn't labeled they easily forget what it is for and will simply say after the incident "I didn't know what it was or who to call." Possible?
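
    ESXi does expose a setting for exactly this: the DCUI welcome screen text comes from the Annotations.WelcomeMessage advanced option, which can be set per host from the vSphere Client (Configuration > Advanced Settings) or from the shell. A hedged sketch for an ESXi 5.x host (verify the option path on your build; the contact details and host names are placeholders):

      esxcli system settings advanced set -o /Annotations/WelcomeMessage \
          --string-value="Branch: Springfield | Contact IT: 555-0123 before powering off | VMs hosted: FILESRV01, PRINTSRV01"

    Note this replaces the normal support text on the console screen rather than adding to it, and the DCUI may need to be restarted (or the host rebooted) before the new text shows.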


  • Isolating a computer in the network

    - by Karma Soone
    I've got a small network and want to isolate one of the computers from the whole network. My network:
      ADSL Router --> Netgear DG834G <----> Trusted PC 1
                                     <----> Trusted PC 2
                                     <----> Untrusted PC
    I want to isolate this untrusted PC within the network. That means the network should be secure against:
      ARP poisoning
      sniffing
      the untrusted PC seeing / reaching any other computers within the network (it should still be able to go out to the internet)
    Static DHCP and switch usage solve the problem of sniffing/ARP poisoning. I can enable IPsec between computers, but the real problem is sniffing the traffic between the router and one of the trusted computers. Against the untrusted PC grabbing a new IP address (a second IP address on the same computer) I would need a firewall with port security (I think), and I don't think my ADSL router supports that. To summarise, I'm looking for a hardware firewall/router which can isolate one port from the rest of the network. Could you recommend such hardware, or can I easily accomplish this with my current network?
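
    To make the requirement concrete, this is what the isolation rule looks like on any router/firewall that can filter between its own interfaces; the sketch below uses Linux iptables purely as an illustration and assumes the untrusted PC sits on its own interface (eth2), the trusted LAN on eth1, and the internet uplink on ppp0 (none of which the DG834G itself can express):

      iptables -A FORWARD -i eth2 -o eth1 -j DROP     # untrusted segment may not reach the LAN
      iptables -A FORWARD -i eth2 -o ppp0 -j ACCEPT   # untrusted segment may still reach the internet

    In practice the same effect is usually bought as a "guest VLAN" or "port isolation" feature on a small managed switch plus a router that can route both VLANs, which also addresses the ARP-poisoning and sniffing concerns because the untrusted port never shares a broadcast domain with the trusted PCs.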


  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8 R2 trying to connect to a remote SQL Server 2008 R2 instance; all machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to SQL Server with the following:
      Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
    However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (DOMAIN\MACHINENAME$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
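
    For context on what is being granted: when a site runs as ApplicationPoolIdentity and uses Integrated Security against a remote SQL Server, it authenticates as the web server's domain machine account, so that account needs a login and a database user. A hedged sketch via sqlcmd, with MYDOMAIN, WEBSRV, SQLSRV and MyAppDb as placeholders:

      sqlcmd -S SQLSRV -E -Q "CREATE LOGIN [MYDOMAIN\WEBSRV$] FROM WINDOWS"
      sqlcmd -S SQLSRV -E -d MyAppDb -Q "CREATE USER [MYDOMAIN\WEBSRV$] FOR LOGIN [MYDOMAIN\WEBSRV$]"

    If that grant is already in place (as the post says) and the server still sees ANONYMOUS LOGON, the usual remaining suspect is Kerberos: a missing or duplicate SPN for the SQL service can make the connection arrive anonymously, so checking setspn -L on the SQL Server service account is a reasonable next step.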


  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients--they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the local group policy editor on the Windows 7 server:
      Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
      Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
      Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
      Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment
    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.
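
    Those policies map to concrete registry values, so it can be worth confirming they actually landed by looking directly under the LanmanServer parameters; the value names below are the standard ones, but back up the key first, use your real share name instead of the "ShareName" placeholder, and restart the Server service (or reboot) afterwards:

      reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares
      reg add   "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d "ShareName" /f
      reg add   "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v RestrictNullSessAccess /t REG_DWORD /d 0 /f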


  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope that many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:
      Most large databases require a team of DBAs in order to scale. They constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
      Finally, we have several companies like Intel, Samsung, FusionIO, etc. that have just started selling extremely fast yet affordable SSDs based on SLC flash technology. These drives are 100 times faster for random reads/writes than the best spinning hard drives on the market (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as for sequential I/O, which is awesome for databases. These SSDs cost around $10-$20 per gigabyte, and they are relatively small (64GB).
    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs (which would cost only a few thousand dollars). Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles. Is anybody else interested in this? I've been testing a few SSD drives and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories! PS. I know that there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
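
    A rough back-of-the-envelope check of the "few thousand dollars" claim, using only the numbers quoted above and an assumed 8-drive array:

      per-drive cost:   64 GB x $15/GB (midpoint of $10-$20)  = ~$960
      8-drive array:    8 x $960                              = ~$7,680
      RAID 5 capacity:  (8 - 1) x 64 GB                       = 448 GB usable

    So the array lands under the stated $10K ceiling, but only if roughly 450 GB covers the database; write endurance and the RAID 5 write penalty on small random writes are the variables the per-gigabyte price doesn't capture.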


  • Time Machine vs Source Control?

    - by Blub
    I finally got convinced to start using some kind of version control for my code instead of zipping up a copy of the project at the end of each day. I downloaded TortoiseSVN and used it to create a repository locally on my HDD. I've been using it for 2 days now, but I have to say that using it is actually more hassle than just copying the project manually in Explorer. Sure, you only store incremental changes, but with the cheap disks of today I can't really say that's an argument when you only have small projects. I haven't really found a quick way to browse the older versions of my files either. What I want is an infinite undo that is completely transparent while I code: if I save the file, I want a backup. I don't want to check out and check in, and don't even get me started on moving files. I haven't tried Time Machine for OS X, but it looks like exactly what I'm looking for. Does such a program exist for Windows? Preferably free and with some kind of tagging system so I can tag a timestamp when the project is working, etc. Maybe I should add that I mostly work alone on a single computer. Update: Some of you asked why I want backup. Since I work alone, it's mostly to allow me to quickly hack up a solution without worrying that something will screw up.
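
    For comparison with the save-triggered snapshot idea, here is a minimal sketch of how the same workflow can be approximated with an ordinary VCS and a scheduled task (git is used purely as an example; the repository stays local and nothing has to be checked in by hand):

      git init                                 # one-time, inside the project folder
      git add -A && git commit -m "initial snapshot"
      rem Then schedule the following line every few minutes (Task Scheduler on Windows, cron elsewhere):
      git add -A && git commit -m "auto snapshot"

    Tagging a known-good state is then just "git tag working-state". This isn't as transparent as Time Machine's hourly snapshots, but it gives browsable history per interval without the check-out/check-in ceremony.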


  • XenServer VMs can't reach network

    - by toto
    I'm currently trying to set up a small cloud architecture. I'm using CloudStack 2.2.14, which needs two nodes: a management server (node1) to provision the cloud, and a hypervisor, XenServer 5.6 SP2, to host the VMs (node2). I succeeded in creating both node1 and node2 as VMs inside an ESXi 5 (VMware) host. So ESXi 5 is hosting two VMs, node1 + node2, and node2 (the XenServer) will in turn host VMs of its own (such as Ubuntu or CentOS). Both node1 and node2 can ping each other and can get an internet connection through ESXi 5, but my problem is that the VMs inside node2 (XenServer) can't reach the network: they can't ping node1 or ESXi or get an internet connection, but they can ping other VMs in node2 (XenServer). So I tried to:
      Set up a DHCP server as node3 in ESXi 5 and connect node2 (XenServer) to it, but the VMs in node2 still can't reach the outer network.
      Set up a DHCP server inside node2, but I get the same problem.
    So:
      Is there any other configuration I'm missing in node2 (considering that I'm sure about the DNS, gateway and netmask configuration)?
      Is the problem that I'm creating VMs inside node2 (XenServer), which is itself a VM inside ESXi 5?
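
    One thing specific to this nested setup: by default an ESXi standard vSwitch drops frames whose source MAC it didn't assign, which is exactly what the XenServer guests generate, so their traffic never leaves node2. A hedged sketch of relaxing the security policy on the vSwitch carrying node2 (ESXi 5.x syntax; vSwitch0 is a placeholder, and the same switches exist as checkboxes in the vSphere Client under the vSwitch security settings):

      esxcli network vswitch standard policy security set -v vSwitch0 \
          --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true

    If the guests come online after that, the DHCP experiments above become unnecessary; if not, at least the nested-virtualization variable is ruled out.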


  • Can I use @import to import Kod's default style sheet into my own?

    - by Thomas Upton
    I understand that Kod is being actively developed and is prone to drastic changes in any area. I would like to modify some small things (like font face and size or certain colors) while still being able to benefit from any changes or updates to the default Kod stylesheet. I thought that I would be able to @import the default stylesheet into my own to achieve this. This is what ~/.kod/custom.css would look like:
      @import url("file:///Applications/Kod.app/Contents/Resources/style/default.css");
      /* Change the default font face and color. */
      body {
        font-family: Menlo, monospace;
        color: #efefef;
      }
    This stylesheet was set with the following defaults command, per the comments at the top of Kod's default CSS file:
      defaults write se.hunch.kod style/url ~/.kod/custom.css
    Unfortunately, this didn't work. When I first tried to reload the style, Kod crashed. It opened fine again, but the @import statement wasn't working, and Kod crashed every time I saved the custom.css file. Am I doing something wrong? Did I write my @import statement wrong? Is that not how @import is supposed to work? Did I miss some sort of documentation or Kod Google Groups post that mentions that Kod explicitly disallows this?


  • How to run a Fujitsu P27T-7 LED monitor at a non-native resolution and get perfect font rendering

    - by Ilia Rostovtsev
    My problem is the complete opposite of anything I could find: I need to run my monitor at a NON-native resolution and have perfect font rendering. I recently got myself an Ultra HD 2560x1440 27-inch monitor (Fujitsu P27T-7 LED) and I have an issue with it. I would call it personal, but I'm afraid it's not, as a few people have already agreed with me. I do programming, and the text in UHD is way too small for comfortable use. I changed the resolution to regular Full HD (1920x1080); the size became just right, but the text now looks slightly blurry compared both to the native UHD resolution and to my old 23-inch NEC. I am pretty frustrated and not sure what to do or how to make the fonts look as sleek as they should. I can't work at the UHD resolution (my vision is 100% perfect): simply calculated, the picture size of Ultra HD (2560x1440) on 27 inches is around 30% smaller than Full HD (1920x1080) on 23 inches. To get the same font size as Full HD on a 23-inch screen, a 27-inch-class Ultra HD monitor would have to be around 32 inches. If I set my new monitor to regular Full HD 1920x1080, the font size is just perfect but the quality is not, because it's blurry. Could anyone please help me out with advice on how to solve this problem? Spec: nVidia 560 Ti with DVI-D port, on Fedora 20. EDIT 1: Changing fonts doesn't really help, as everything else still doesn't look the way it should. EDIT 2: The monitor buzzes badly at 2560x1440 whenever there are lots of lines on the screen, like a file listing. If I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?
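
    The usual alternative to dropping the resolution is to keep the panel at its native 2560x1440 (so the scaler never blurs anything) and scale the text up instead. A hedged sketch for the GNOME desktop that Fedora 20 ships (the key exists in GNOME 3.x; the 1.3 factor is just a starting point):

      gsettings set org.gnome.desktop.interface text-scaling-factor 1.3

    That only scales fonts, not icons or widget chrome, so it tends to suit the "code is too small" complaint without reintroducing the interpolation blur of a non-native mode. KDE and Xfce have equivalent DPI / font-size settings if GNOME isn't in use.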


  • ubuntu 9.04 pptp broken after a power failure

    - by kevin42
    I have a small Ubuntu 9.04 router set up as a NAT box and a PPTP server. After a power failure everything except the PPTP server still works. A Windows client gets to "registering your computer on the network" but then says: Error 742: The remote computer does not support the required data encryption type. I did some research and I think the problem is with the ppp_mppe module. When I try to run 'modprobe ppp_mppe' it hangs indefinitely. What would cause this hang? Any ideas how I can troubleshoot this further? Thanks for the help! UPDATE: I am still having the problem, but I have found some more information. When the first user tries to connect to PPTP, the process list shows modprobe sha1 running, and one instance of modprobe ppp_mppe for each connection attempt. If I killall modprobe at this point, the next connection attempt works, and everything is fine until the next reboot. I'm planning to do a clean install at some point in the future, but I'd really like to get to the real cause of this.
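
    A few cheap checks that narrow down where a hanging modprobe is stuck, given that the power failure makes on-disk corruption of the module tree a real possibility (all standard tools, run as root):

      lsmod | grep -E 'ppp|mppe|sha1'   # is a half-loaded instance already present?
      dmesg | tail -n 20                # kernel messages from the moment of the hang
      depmod -a                         # rebuild the module dependency maps (cheap and safe)
      dpkg -S ppp_mppe.ko               # which package owns the module file, in case it needs reinstalling

    The "modprobe sha1" in the process list is ppp_mppe pulling in its crypto dependency, so if dmesg points at sha1 failing to load, reinstalling the kernel package that dpkg -S names is the likely follow-up.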

