  • Python not Working in Vim

    - by jdg
    I have a new install of Vim from the automatic Windows installer (gvim73_46.exe), and I have Python 2.7 (32-bit) installed. If I open gvim and type :set python? I get "E518: Unknown option". If I try typing :python 'hello', Vim crashes. What could be wrong? I checked, and C:\Windows\System32\python27.dll is where it should be. I am really lost here; does anyone have any ideas as to what is going wrong? Here are the contents of :version in case they are helpful (the Python interface is compiled in and expects Python 2.7):

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02)
        MS-Windows 32-bit GUI version with OLE support
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version with GUI. Features included (+) or not (-):
        +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent
        +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff
        +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi
        +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input
        +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak
        +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse
        +mouseshape +multi_byte_ime/dyn +multi_lang -mzscheme +netbeans_intg +ole
        -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer
        -profile +python/dyn +python3/dyn +quickfix +reltime +rightleft +ruby/dyn
        +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop
        +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent
        -termresponse +textobjects +title +toolbar +user_commands +vertsplit
        +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu
        +windows +writebackup -xfontset -xim -xterm_save +xpm_w32
        system vimrc file: "$VIM\vimrc"
        user vimrc file: "$HOME\_vimrc"
        2nd user vimrc file: "$VIM\_vimrc"
        user exrc file: "$HOME\_exrc"
        2nd user exrc file: "$VIM\_exrc"
        system gvimrc file: "$VIM\gvimrc"
        user gvimrc file: "$HOME\_gvimrc"
        2nd user gvimrc file: "$VIM\_gvimrc"
        system menu file: "$VIMRUNTIME\menu.vim"
        Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DFEAT_NETBEANS_INTG -DFEAT_XPM_W32 -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjGOLYHTR/ /Ox /GL -DNDEBUG /Zl /MT -DFEAT_OLE -DFEAT_MBYTE_IME -DDYNAMIC_IME -DFEAT_GUI_W32 -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_TCL -DDYNAMIC_TCL -DDYNAMIC_TCL_DLL=\"tcl83.dll\" -DDYNAMIC_TCL_VER=\"8.3\" -DFEAT_PYTHON -DDYNAMIC_PYTHON -DDYNAMIC_PYTHON_DLL=\"python27.dll\" -DFEAT_PYTHON3 -DDYNAMIC_PYTHON3 -DDYNAMIC_PYTHON3_DLL=\"python31.dll\" -DFEAT_PERL -DDYNAMIC_PERL -DDYNAMIC_PERL_DLL=\"perl512.dll\" -DFEAT_RUBY -DDYNAMIC_RUBY -DDYNAMIC_RUBY_VER=191 -DDYNAMIC_RUBY_DLL=\"msvcrt-ruby191.dll\" -DFEAT_BIG /Fd.\ObjGOLYHTR/ /Zi
        Linking: link /RELEASE /nologo /subsystem:windows /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib gdi32.lib version.lib winspool.lib comctl32.lib advapi32.lib shell32.lib /machine:i386 /nodefaultlib libcmt.lib oleaut32.lib user32.lib /nodefaultlib:python27.lib /nodefaultlib:python31.lib e:\tcl\lib\tclstub83.lib WSock32.lib e:\xpm\lib\libXpm.lib /PDB:gvim.pdb -debug
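
    One note on the diagnosis: :set only queries options, so "E518: Unknown option" from :set python? is expected even in a healthy build. The usual check for a dynamically loaded interface (run inside gvim) is:

        :echo has('python')    " 1 if python27.dll can be found and loaded, 0 otherwise
        :echo has('python3')   " same check for the python31.dll interface

    With a +python/dyn build, a 0 here usually means the expected DLL isn't on Vim's search path; a crash on :python instead tends to point at a mismatched or wrong-architecture DLL.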

  • Reconnecting OST with Exchange

    - by syrenity
    Hi. I have a quite big problem with a customer's MS Exchange. The server got its disk filled about 2 weeks ago, so it's currently offline. They plan to upgrade it, but are in no hurry, as they use it mainly for OWA and backup; the mail exchange itself is done via SMTP and POP3.

    While trying to diagnose a problem today, one of the users (following the ISP's instructions) removed the Exchange account from Outlook, which essentially left the OST orphaned. The user naturally hadn't moved the emails or any other data to an Archive/PST beforehand, so these emails are located in the OST only. So currently I'm trying to figure out how to restore them. I see two options:

    1) Have the user buy some tool to convert the OST to a PST, and import it as the archive / main Outlook file.
    2) Reconnect Outlook to Exchange (once it's up), let it sync the old server content, then shut down Outlook, replace the new OST with the old one, start Outlook again in offline mode and move the emails to the archive.

    Or is there any other method? Can someone advise what the best approach would be here? The versions used are Outlook 2007 and Exchange 2003. Thanks!

  • FTP Server with advanced features

    - by Nikolas Sakic
    Hi, we supply zone files to our customers. Some zone files are big, about 300MB, and some are quite small, maybe 1MB. We had an issue where someone set up a script to continually download a file; imagine downloading a 300MB file a few hundred times a day. Since we don't have a packet shaper to throttle the traffic, we need to upgrade the FTP server and use add-on modules to limit downloads somehow. We currently use the proftpd server. Also note that there are different users for different domains: say, if you want to download the zone file for the .INFO domain, you use a particular user, and that user can't download any other zone's file. This is what we are looking for:

    - A maximum of 400MB downloaded per user per day (or even different download limits for different users per day).
    - One connection per user at any time.
    - A maximum of 5 (non-simultaneous) connections per user per day; anyone trying to exceed that gets banned for 24 hours.

    Has anyone used an FTP server with restrictions similar to the above? Does anyone have any ideas where I can start? Any help would be appreciated. Thanks. -N
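
    For what it's worth, a rough ProFTPD sketch of the connection-count side (untested; directive arguments are worth double-checking against the mod_ban and mod_quotatab docs for your version):

        MaxClientsPerUser 1 "Only one session per account"

        <IfModule mod_ban.c>
          BanEngine on
          BanTable /var/run/proftpd/ban.tab
          # 5 rejected extra connections within 24h -> banned for 24h
          BanOnEvent MaxClientsPerUser 5/24:00:00 24:00:00 "Too many connections, banned for 24 hours"
        </IfModule>

        <IfModule mod_quotatab.c>
          # per-user byte limits live in the limit table (see the ftpquota utility);
          # mod_quotatab has no built-in daily window, so a cron job would have to
          # reset the tally table every 24 hours to get "400MB per day"
          QuotaEngine on
          QuotaLimitTable file:/etc/proftpd/quota.limittab
          QuotaTallyTable file:/etc/proftpd/quota.tallytab
        </IfModule>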

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that detracts a little from the officialness of the results. This year we went with a (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first.

    The question: These files are only about 14MB in total, and a truly lightweight Linux distribution isn't that big either, so I'm thinking: what if next year we serve everything from a RAM drive? Is there a setup like that? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations I should be focusing on instead?
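
    A full RAM-drive distribution may be overkill for 14MB of files: any Linux can serve a directory straight from RAM with tmpfs, and after the first read the kernel's page cache keeps a file set that small in RAM anyway, so the gain over a warmed-up disk setup is likely marginal. A minimal sketch (paths are illustrative):

        mkdir -p /var/www/results
        mount -t tmpfs -o size=64m,mode=0755 tmpfs /var/www/results
        cp -a /srv/results-master/. /var/www/results/   # tmpfs is volatile: repopulate after every boot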

  • Windows 7 doesn't start anymore

    - by martani_net
    Hi, I've experienced some BSODs on Windows 7 RC, and some freezing at startup, but today brought a big surprise: it doesn't start anymore. I tried starting in safe mode with no results either; it shows the startup animation, then a blue screen for less than a second, and turns off immediately. The only thing I remember doing today is updating Flash Player under Firefox; then Chrome stopped working even after logging off, and after restarting, Windows doesn't start anymore. Has anyone experienced the same issue? Any hints?

    [EDIT 3] Solved: Windows 7 has a very smart repair strategy. It works automatically, and it tried every possible fix; what fixed my problem was a system restore to a previous date, and all of this happened automatically.

    [EDIT 2] These are the last lines in the ntbtlog.txt file:

        Did not load driver \SystemRoot\System32\drivers\vga.sys
        Loaded driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Did not load driver \SystemRoot\System32\Drivers\NDProxy.SYS
        Loaded driver \SystemRoot\system32\drivers\CHDRT32.sys
        Loaded driver \SystemRoot\system32\DRIVERS\VSTAZL3.SYS
        Loaded driver \SystemRoot\system32\DRIVERS\VSTDPV3.SYS
        Loaded driver \SystemRoot\system32\DRIVERS\VSTCNXT3.SYS
        Loaded driver \SystemRoot\system32\drivers\modem.sys
        Loaded driver \SystemRoot\system32\DRIVERS\usbccgp.sys
        Loaded driver \SystemRoot\System32\Drivers\usbvideo.sys

    [EDIT] This is the BSOD I get: http://twitpic.com/i87cx

    Thank you.

  • How to set up an easy-to-use proxy for the whole system with WinXP client and server?

    - by Pekka
    I am working together intensively with a colleague in the Canary Islands. We speak through Live Messenger and work together using RDP software. She has frequent problems with connections to certain big-name and small-name sites (among others live.com, google.com, gmx.de), very likely caused by the Spanish provider (the connections simply time out; this has been going on for weeks already). I have been thinking about setting up my computer as a proxy to make these connections work. I have a DSL connection and am behind a NAT-capable router that I control.

    Does anybody know a simple, "one-click" way to route ALL network traffic through a remote proxy, without having to set proxy settings for each application that uses the internet? VPN is not an option: I am behind a firewall that supports protocol 47 and such, but I have never succeeded in getting an incoming VPN connection to work. I can, however, redirect normal traffic using NAT. A VPN solution that does not need exotic protocols would also be an option.

  • What is the peak theoretical WiFi G user density? [closed]

    - by Bigbio2002
    I've seen a few WiFi capacity-planning questions, and this one is related, but hopefully different enough not to be closed. Also, this relates specifically to 802.11g, but a similar question could be asked for N.

    In order to squeeze more WiFi users into a space, the transmit power on the APs needs to be reduced and the APs squeezed closer together. My question is: how far can you practically take this before the network becomes unusable? There will come a point where the transmit power is so weak that nobody can actually pick up a connection, or clients are constantly roaming between APs spaced a few feet apart as they walk around. There are also only 3 non-overlapping channels to use, which is another factor to consider. After determining the peak AP density, multiply by users-per-AP, which should be easier to find out. After factoring all of this in and running some back-of-the-envelope calculations, I'd like to be able to get a figure like "XX users per 10 ft^2". This can be considered the physical limit of WiFi, and would keep people from asking about getting 3,000 people in a ballroom conference onto WiFi. Can anyone with WiFi experience chime in, or better yet, provide some calculations for a more accurate figure?

    Assumptions: Let's assume an ideal environment with no reflection (think of a big, square, open room with the APs spaced out on a plane). APs are placed on the ceiling so humans won't absorb the waves, and the only interference is from the APs themselves and the devices. As for which devices specifically, that's irrelevant to the first part of the question (AP density, so only channel and transmit power should matter).

    User experience: Wikipedia states that Wireless G has about 22Mbps maximum effective throughput, or about 2.75MB/s. For the purpose of this question, anything below 100KB/s per user can be deemed a poor user experience. As for roaming, I'll assume the user is standing in the same place, so hopefully that will be a non-issue.
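
    For a starting point, the back-of-the-envelope arithmetic with the question's own numbers looks like this (Python; it ignores how 802.11 MAC efficiency degrades as stations are added, so treat it as an optimistic upper bound):

        effective_bps = 22e6                 # ~22 Mbps usable per 802.11g AP
        per_user_floor = 100e3 * 8           # 100 KB/s minimum, in bits/s
        users_per_ap = int(effective_bps // per_user_floor)   # 27
        colocated_aps = 3                    # one AP per non-overlapping channel (1/6/11)
        print(users_per_ap * colocated_aps)  # ~81 users per contention area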

  • Network speed between a VM and another machine not residing on the same host is 11MB/s at most

    - by Henno
    Problem: network speed between a VM and another machine which is not residing on the same host is 11MB/s at most.

    Facts:

    - ESXi5 version is 5.0.0.504890
    - The VM has the latest VMware Tools installed and uses the E1000 network driver
    - The physical box runs Windows Server 2008 R2
    - CrystalDiskMark says the drive in the physical box can read/write 100MB/s
    - vCenter is another VM on the ESX host
    - Both the VM and the physical box show a 1Gbps link speed, and Configuration > Networking shows vmnic0 as 1000 Full
    - NTttcp is a client/server tool from Microsoft for measuring pure network throughput

    Here's what I've done so far.

    Test 1: The VM runs Filezilla FTP Server (default settings, one user account created) and the physical box runs the Filezilla FTP client (default settings). The physical box uploads a big file to the FTP server: transfer speed, as observed by Windows Task Manager on both machines, is ~11MB/s (bad). The physical box then downloads that file from the FTP server: still ~11MB/s (bad). Could it be a disk performance issue?

    Test 2: The physical box runs "ntttcpr.exe -a 6 -m 6,0,VM_IP_ADDRESS" and the VM runs "ntttcps.exe -a 6 -m 6,0,PHY_BOX_IP_ADDRESS". Transfer speed as observed by Windows Task Manager on both machines: ~11MB/s (bad). Could it be a switch performance issue?

    Test 3: The physical box runs the vSphere Client; I open Summary > Storage > datastore > Browse Datastore... from the physical box and upload a file to the datastore. Transfer speed as observed on the physical box: ~26-36MB/s (good). Could it be a VM-specific issue?

    Test 4: I installed NTttcp in another VM on the same ESX server and measured network performance between VMs on the same host: ~90-120MB/s (excellent :)

    Test 5: I have another ESX server on the same site, connected to the same datastore and the same switch. Both ESX servers have 2 NICs; one NIC goes to the switch while the other goes directly to the other ESX server. I vMotioned one of the test VMs off to the other ESX host and measured network performance between VMs on different ESX servers: ~11MB/s (bad).

    While I'm aware of these threads (ESXi 4.1 slow file transfer; ESXi 5 network performance is slow; Debian Etch and ESXi slow network speeds; VMWare ESXi slow file copy to guest), they did not help (or I must have missed something).
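
    One arithmetic sanity check worth adding: every "bad" result above is the same number, and 11 MB/s * 8 = 88 Mbit/s, which is almost exactly a saturated Fast Ethernet link. That would suggest something in the VM-to-physical path (a cable, a switch port, or a vSwitch uplink) may have negotiated 100 Mbps despite the reported 1 Gbps, which would also fit the VM-to-VM test on the same host being fast.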

  • Is Cherokee (probably) the best static content server for beginner sysadmins?

    - by Bad Learner
    I have read the pros and cons of most of the popular web servers and have come to the conclusion that Apache would (probably) be the best web server for serving dynamic content; no wonder YouTube, Flickr and Facebook, among many others, use it. I do not know if the C10K problem applies to Apache even when serving dynamic content only, but I think any web server used to serve dynamic content needs some good tweaking for optimized performance, and given that nothing beats Apache when it comes to documentation, resources and support on the web, I think I will go with Apache for dynamic content.

    That apart, the confusion begins when it comes to choosing a web server for static content (including streaming videos). I see that Nginx, Cherokee and Lighttpd are among the best (I am not considering non-open-source or non-Linux options here). So, which to choose?

    1. I know one cannot go wrong with any of the three (Nginx, Cherokee, Lighttpd).
    2. Lighttpd's development has evidently gotten slower than it was a good while ago.
    3. The documentation is pretty good for all three, and hopefully so are the resources (knowledge of these among the users of the Stackoverflow/Serverfault sites, the web, etc.).

    Precisely, and noting points [2] and [3], if I am not wrong, I should go with either Nginx or Cherokee. I would love to see someone clarify these:

    - Is Cherokee just as fast (MB/s), performant (connections/s), and reliable (think downtime / restarting the server) as Nginx for serving static content and load balancing, for small, medium to large (and really large) websites and applications? (Think the size of YouTube, Apache or Facebook.)
    - If the answer to the question above is a big "hell, yes!", then I should probably prefer Cherokee, right? Because, since I am a beginner, it would be a lot easier to set up Cherokee, as it has a graphical admin user interface plus really good documentation. Yes?

    I could be wrong, I could be right. I've put down what I know so that you can offer the most relevant advice. Pardon me if anything I've said is offensive.

  • vim coloring for git

    - by kelloti
    I'm on Windows and my Vim loads with a terrible colorscheme. The message text is blue on black (so I can't see what I'm typing). I need to change the colorscheme, but :colorscheme slate doesn't do anything. :version says:

        VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:51:38)
        MS-Windows 32-bit console version
        Included patches: 1-46
        Compiled by Bram@KIBAALE
        Big version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
        +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs
        -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path
        +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn
        +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent
        +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape
        +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra
        -perl +persistent_undo -postscript +printer -profile -python -python3
        +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff
        +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static
        -tag_any_white -tcl -tgetent -termresponse +textobjects +title -toolbar
        +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo
        +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim
        -xterm_save -xpm_w32
        system vimrc file: "$VIM\vimrc"
        user vimrc file: "$HOME\_vimrc"
        2nd user vimrc file: "$VIM\_vimrc"
        user exrc file: "$HOME\_exrc"
        2nd user exrc file: "$VIM\_exrc"
        Compilation: cl -c /W3 /nologo -I. -Iproto -DHAVE_PATHDEF -DWIN32 -DFEAT_CSCOPE -DWINVER=0x0400 -D_WIN32_WINNT=0x0400 /Fo.\ObjC/ /Ox /GL -DNDEBUG /Zl /MT -DDYNAMIC_ICONV -DDYNAMIC_GETTEXT -DFEAT_BIG /Fd.\ObjC/ /Zi
        Linking: link /RELEASE /nologo /subsystem:console /LTCG:STATUS oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib libcmt.lib user32.lib /PDB:vim.pdb -debug

    My $HOME\_vimrc looks like:

        colorscheme slate
        syn on
        set shiftwidth=2
        set tabstop=2

    and my $VIM\vimrc is the stock vimrc that comes with the Windows Vim distribution. How do I change my console Vim colorscheme, especially for Git commits?
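
    A couple of hedged suggestions: console Vim on Windows is limited to 16 colors, so schemes designed for the GUI may look broken or appear to do nothing, and the commonly recommended ordering is syntax on before colorscheme, so that syntax initialization can't clobber the scheme's highlights. A sketch of $HOME\_vimrc along those lines:

        syntax on
        " desert and elflord are stock schemes with usable 16-color (cterm) definitions
        colorscheme desert
        set shiftwidth=2
        set tabstop=2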

  • Mac mini simple customized, Mac mini server or other?

    - by microspino
    I'm facing a big IT choice for my little office and I need some advice. We have 5 users, 1 superuser, 1 HP 500 DesignJet plotter, 4 other laser printers, and 1 HP fax/print/scan/copy machine. All the clients are XP SP3 boxes. We would like to:

    - centralize and share 90Gb of files using Dropbox (this way we get LAN sync of local working directories + internet backup + access to our files wherever we are)
    - centralize our plotter, printers and fax machine
    - back up all the workstations
    - share Outlook calendars and tasks
    - run 24x7 while saving some energy

    Of course this setup is just the first step toward a more serious and creative network management of our office, so we are open to new ideas. The budget varies from 400€ to 900€; we are not tech gurus, but at least one of us is a power user close to becoming a geek.

    I've read some articles on macminicolo about the Mac mini, either the normal one or with Snow Leopard Server. I've heard about Windows Home Server too, on the Lifehacker website, but I'm in a sort of analysis paralysis. Can you help me?

  • Detecting login credentials abuse

    Greetings. I am the webmaster for a small, growing industrial association. Soon I will have to implement a restricted, members-only section on the website. The problem is that our membership includes both big companies and amateur "clubs" (it's a relatively new industry...). It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all its members on the website).

    I have thought about logging, along with each sign-on, the IP address as well as the OS and browser used. If the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers. But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated widely, and there would then be cause for action, such as changing the password.

    Of course, it is extremely annoying to have your password unilaterally changed. So, for this problem, I thought about allowing the clubs to manage their own lists of sub-accounts; if abuse is suspected, the user responsible can easily be pinned down, and this sub-member alone would face the annoyance of a password change.

    Question: what potential problems would anyone see with such an approach?
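
    The counting heuristic described here is cheap to prototype. A sketch in Python, assuming sign-ons are logged as (account, ip, user_agent, timestamp) rows; the thresholds and all names are illustrative:

        from collections import defaultdict

        def flag_shared_accounts(signons, max_ips=10, max_agents=8):
            """Return accounts whose sign-on diversity suggests shared credentials."""
            ips, agents = defaultdict(set), defaultdict(set)
            for account, ip, user_agent, _ts in signons:
                ips[account].add(ip)
                agents[account].add(user_agent)
            return [a for a in ips
                    if len(ips[a]) > max_ips or len(agents[a]) > max_agents]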

  • Backup Exec tape rotation guidelines

    - by HannesFostie
    Hi. We use Backup Exec to take care of the backups for our data server, Exchange server, and one more set of systems. Each of these three is done on a separate set of tapes. Our goal is to be able to roll back a full 2 weeks, with 1 full backup each weekend and differential/incremental backups in between (the difference between the two doesn't matter much in our case, because the employees mostly use a very similar set of files throughout the week).

    While playing around with the settings on how to achieve this, we set the time for BE to keep the full backup to 14 days, but because we have too much data this requires manual intervention each time to erase a certain tape and use that one. What I would like to know is what kind of guidelines, tricks, tips and general "stuff to think about" you keep in mind when designing your backup schedule. The type of backups (full/diff/incr) isn't of much importance in our case, as it's more or less set in stone.

    I made this community wiki as it's not a very specific question. Thanks in advance!

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now:

    - I keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones to focus on.
    - In iCal I write down events for each day, or for a concrete time slot (13:00 to 14:00). Each day I set up the tasks I want to accomplish and allocate hours to them.
    - I use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name: someday).
    - I use Mail on the Mac and create folders for the mails I want to process, named after the different projects.
    - I keep the main info for each project in FreeMind maps.

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

    - must be 100% offline accessible
    - should use as few programs/resources as possible, ideally just one program able to manage all my info
    - lets me use the GTD methodology mixed with priorities, and allocate each task, converted to an event, on my calendar
    - gives me different daily/weekly etc. views of a calendar so I can see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter; I will pay for this

    So, based on your experience, can you recommend something like this? Thanks

  • Why is the Windows 8 recycle bin using more space than it is allocated?

    - by oldmankit
    I ran WinDirStat to scan the contents of my hard drive. I was surprised to see that the $RECYCLE.BIN folder on my D: drive takes 26 GB of space.

    I emptied the Recycle Bin and refreshed the folder in WinDirStat, but it still took 26 GB of space. I reduced the maximum size of the Recycle Bin for this drive to 10000 MB for the main user of this computer, disabled the Recycle Bin for the other user, and refreshed the folder in WinDirStat; it still took 26 GB. I then ran (in an elevated window) "rd /s D:\$Recycle.bin", refreshed the folder in WinDirStat, and finally it became empty.

    Why was it taking up space even after I emptied it? Why was it taking more space (26 GB) than the maximum allowed amount (10 GB)?

    Update: After six months of using Windows (no reinstall and no changes to Recycle Bin settings), I used WinDirStat to check how big D:\$RECYCLE.BIN has become. It is now 29 GB. In Recycle Bin Properties, I select drive D, and it still shows the custom maximum size (10000 MB).
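
    For anyone hitting the same thing: each subfolder of $Recycle.Bin belongs to one user SID, and the per-user maximum only governs that user's own folder, so one plausible culprit is leftovers from deleted accounts or a previous Windows install that no current user's quota covers. A quick way to see where the space actually sits, from an elevated prompt:

        dir /a /s D:\$Recycle.Bin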

  • Green flickering pixels that move with black images

    - by user568458
    A strange question... Occasionally, on my LCD screen, pixels that should be black flicker rapidly and constantly between black and green, about 4 flickers a second. The crazy part is that, unlike dead/stuck pixels, they are relative to the content on the screen and move with it.

    For example, I might be looking at a web page with a picture that has lots of black in it. There might be a couple of green flashing pixels in that black that shouldn't be there. I scroll the page, and the green flickering pixels move with the image. Every physical pixel seems fine, but somehow something interprets part of the image in a way that causes flickering green...

    It's not just in a web browser. My first thought was to blame a trolling blogger cunningly uploading an animated GIF that simulates a failing pixel... but it happens in a wide range of applications. It seems to occur randomly, except that it seems to occur only in areas of pure black, and it's always pure 100% green. It happens rarely enough that it's not a big deal, but it's such a strange problem that it bugs me. I can't find any info on anything like this, and I'm not even sure if it's hardware or software. Any ideas?

    (Windows 7 laptop connected to the LCD by a DVI-to-HDMI cable)

  • Legal IT documents

    - by TylerShads
    I have been wondering this past week because my big boss told me to start keeping track of all the things I have fixed and how to fix them, which is reasonable and which I have been doing anyway. But then a related question came to mind: what kind of documentation should I have on hand as far as users go? I am talking in terms of an EULA, ToS, etc. (correct me please if I'm using the wrong terms), or more precisely a usage policy, so to speak, for the users. I can't say I'm a legal expert; otherwise I'd be a lawyer. The environment the users are in is pretty laid back, so I don't foresee a problem. But assuming a problem should ever arise, what should I have written up / have on hand?

    EDIT: I really should have noted that we are a medical transport facility and have patient records, so I know something must be done to comply with HIPAA policies. I like what anthonysomerset said about the "if I get hit by a bus" scenario, and I want to apply it not only to the documentation I am currently writing but also to edge cases: say an employee were to steal info from the server, theft, etc. As for our staff, it's relatively small: a single HR person, no legal department aside from the two owners' lawyers, and me being the only IT person on staff, with a guy who is no more than a Mac superuser.

  • How can I report a website that uses the webmail APIs to send spam?

    - by Igoru
    I signed up for a cool job website that, unfortunately, asks whether you want to "invite your friends", and if you say yes, you can give it access to your Gmail contacts to send the invites. However, contrary to what everyone would expect, it doesn't give you a list of people to choose from; instead, it simply sends spam directly to your entire contact list, like the old-fashioned Outlook viruses. When you complain about this, they simply say "we will check the application and see if there is anything that might be confusing for the users". For me and some other friends (who fell for the same prank), this is a clear breach of web best practices and a big disrespect of users' trust. So I would like to know: what can we do to stop the website from using the Gmail/Yahoo/Outlook APIs to send spam this way?

    P.S.: I wonder what would have happened if I had given this website access to post on my Facebook timeline as well. I've gotten a couple of calls from relatives asking about the email, and I wonder how many unrelated people got this spam, like HR addresses from my past and whatnot.

  • Tunneling a public IP to a remote machine

    - by Jim Paris
    I have a Linux server A with a block of 5 public IP addresses, 8.8.8.122/29. Currently, 8.8.8.122 is assigned to eth0, and 8.8.8.123 is assigned to eth0:1. I have another Linux machine B in a remote location, behind NAT. I would like to set up a tunnel between the two so that B can use the IP address 8.8.8.123 as its primary IP address.

    OpenVPN is probably the answer, but I can't quite figure out how to set things up (topology subnet or topology p2p might be appropriate, or should I be using Ethernet bridging?). Security and encryption are not a big concern at this point, so GRE would be fine too; machine B will be coming from a known IP address and can be authenticated based on that.

    How can I do this? Can anyone suggest an OpenVPN config, or some other approach, that could work in this situation? Ideally, it would also be able to handle multiple clients (e.g. sharing all four of the spare IPs with other machines), without letting those clients use IPs to which they are not entitled.
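
    Not a definitive recipe, but the usual shape of this with OpenVPN is a routed tun tunnel whose remote endpoint is the public address itself, plus proxy-ARP on A so the LAN segment keeps finding 8.8.8.123. A static-key sketch (untested; hostnames and paths are illustrative):

        # on A -- first remove 8.8.8.123 from eth0:1
        dev tun0
        proto udp
        port 1194
        ifconfig 10.9.0.1 8.8.8.123      # local / remote tunnel addresses
        secret /etc/openvpn/static.key

        # on B (behind NAT, only outbound UDP needed)
        dev tun0
        proto udp
        remote a.example.com 1194
        ifconfig 8.8.8.123 10.9.0.1
        secret /etc/openvpn/static.key

        # on A, once the tunnel is up:
        #   echo 1 > /proc/sys/net/ipv4/ip_forward
        #   ip neigh add proxy 8.8.8.123 dev eth0

    For multiple clients, the server-mode equivalent would be client-config-dir with per-client ifconfig-push and iroute entries instead of a static key.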

  • How To Replace Laptop HDD Without Losing Data?

    - by Ishan
    Hello, I recently went to the Dell service center, and they tell me the HDD is faulty and needs to be replaced. I have a Studio 1457 laptop with a 500 GB HDD and don't want to lose the data (the laptop was purchased in May 2010 and is still under warranty). I have searched a bit, and I think it may be best to use disk-imaging software for this task; however, I don't know of a good program. I have the following steps in mind:

    1. Get a 1 TB external HDD.
    2. Make an image of the existing 500 GB HDD and store it on the external disk.
    3. Install the new HDD, then install a brand-new copy of Windows and the imaging software on it.
    4. Using the same software I used to make the image, restore the old HDD image onto the new one.

    However, I have some questions. First, is this possible? Second, I live in a country where piracy is a big issue, and I am sure the support executive who comes to change the HDD will bring a pirated copy. But I have a genuine Windows 7 Pro and don't want to lose it, and Dell does not supply OS disks, so I can't reinstall it on the new HDD myself! If I follow the above steps, which version of Windows 7 will be retained: the one in the image (genuine) or the one on the new HDD (pirated)?

    I am ready to purchase good software for this task; my budget is $50-60. Since the laptop is under warranty, the new HDD will be free. One last thing: I have created a Windows migration file whose size is 70 GB. Can it be used to move from Windows 7 Pro to Windows 7 Pro (in case I get a genuine copy of Windows 7)? Any other method to save all the data? Thanks in advance.
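
    If a paid imaging tool turns out to be unnecessary, a free variant of steps 2 and 4 is a raw sector image taken from a live Linux USB (device names are illustrative; verify them with fdisk -l before running dd). Because the restore is sector-for-sector, the genuine, activated Windows install comes back exactly as it was:

        # old 500 GB disk -> image file on the 1 TB external drive
        dd if=/dev/sda of=/mnt/external/old-disk.img bs=4M conv=noerror,sync
        # swap in the new disk, boot the live USB again, then restore
        dd if=/mnt/external/old-disk.img of=/dev/sda bs=4M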

  • How to share text, pictures and video with a restricted set of users

    - by joaoc
    I want to share pictures and videos of my kid growing up, but I don't want them to be open to the public. I just want me and my wife to be able to add material, and then share it with grandparents and friends, who would be given a username/password or some other solution that authenticates them. This content would preferably be available remotely over the internet, or downloaded periodically to the allowed viewers' computers. The second option most likely means they would have to install a client, which I am not too fond of (I would have to ask them to install it, help configure it on different platforms, ...). I'm just starting out, but I would like software that also allows export of the data (in case one day I want to migrate to a different solution).

    Folder sharing: I don't know if Windows folder sharing is reliable, robust and safe enough to share over the internet. Dropbox could be a software solution, but it's limited to 2GB and requires the recipients to install and configure software as well. Also, a folder is just a collection of files, so it would be harder to keep them organized with text describing the pictures/videos.

    Email: I could email the content every now and then, but videos are almost always too big to go as attachments, and this would not provide a history of updates to anyone joining in recently.

    Are there other solutions to achieve this? NOTE: I also thought that software to run a local or remote blog could be an option, but we aren't allowed to ask those questions on Super User because one admin doesn't like cloud-computing questions. If you do think it's the best approach and can suggest a solution, do state it in the answers anyway!

  • Are there compact external USB audio interfaces which are better than on-board sound?

    - by rumtscho
    I am asking this for a friend. He loves his voice-recognition software and dictates a lot of text using a headset. Now he has a new laptop, which only has a combined mic/headphones jack, and he wanted to buy an adapter. I told him to get an external USB sound interface instead, as the better sound quality will probably increase the hit rate of the voice recognition. He agreed, but when he saw a picture of the SoundBlaster X-Fi, he said it is way too big, because he wants to carry the thing everywhere. He'd rather have one of those small devices the size of a flash-memory stick, with only one mic input and one phones output, period.

    Now I am not sure whether these mini interfaces would produce sound any better than on-board audio. They all seem to come not from established audio-interface manufacturers but from electronics-accessories manufacturers like Speedlink, or from no-name brands. Is there a compact audio interface with good A/D quality? (It is OK if the price is comparable to that of the bigger interfaces, even if there is no additional functionality like Cinch inputs/outputs etc.) And if there isn't, will the no-name sound-card sticks offer any advantage over a simple adapter for the on-board sound?

  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128MB of memory (UPDATE: now I'm considering 360MB too):

    - nginx and Redis
    - three instances of Tornado apps (one is for the mobile site, a limited app, not my primary audience); it has around 8 collections. It's a social webapp for my community.
    - MongoDB

    Everything besides MongoDB seems to have a small footprint. Memory-mapping-wise, I don't know how MongoDB will behave. I know it's a bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have... hmm... maybe ~50 users at 15 requests/second. I did my homework on frontend optimizations, and YSlow gives me a grade A (91, ruleset V2) :-)

    Is anyone willing to share experiences? E.g. how big the data set was when Mongo hit the ceiling, performance when Mongo does a lot of disk I/O, etc. Thanks.

    UPDATE: this is my pet project. I'll get back to you when I have spare time to run some httperf tests in a VirtualBox VM with the exact specs. Suggestions on how to do the stress testing are welcome; I'm new to this kind of stuff.
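
    On the stress-testing request: a rough httperf invocation matching the ~15 req/s target could look like this (run from another machine; hostname and URI are illustrative), while watching memory and disk I/O on the VPS:

        httperf --server vps.example.com --port 80 --uri / \
                --rate 15 --num-conns 900 --timeout 5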

  • Barriers to IPv6 deployment: addressing

    - by sysadmin1138
    There are several things that are keeping IPv6 deployment from being a topic of active discussion here at my work. There are the usual technical issues, but one non-technical one appears to be a major stumbling block on the path to actually getting a deployment project going: addresses, memorizing of.

    Specifically, IPv4 addresses are comprehensible, and IPv6 addresses just look like a big long string of hex. The human mind has real trouble memorizing lists of more than 7-8 items, and an IPv4 address (192.168.231.148) has four items in it, which makes it easy for us to memorize. A fully populated IPv6 address has not only 8 sections, but each section has 4 hex digits in it. IPv6 addresses were not designed for memorization. To the technician who knows that the DNS server is at 192.168.42.42 (or more likely "42.42", since the company prefix is likely memorized), the idea of memorizing an IPv6 address fills them with dread. Which in turn makes them much less enthusiastic about participating in an IPv6 deployment project.

    Because of how our network works, we're not fully dynamic in terms of v4 addressing. We have several to many subnets that are entirely statically assigned, for a variety of reasons, chief among them being that the overhead of static DHCP assignments is perceived as being too great. Also, some devices still aren't smart enough to pull DNS addresses out of DHCP while also having a static assignment, and therefore require manually configured DNS settings. Therefore, some v6 address memorization will have to be done.

    We're not under any mandate to get v6 out the door, so we don't have pressure from the top. However, it is time to start prepping our infrastructure to handle IPv6, even if we don't convert wholesale. For those of you who have been in IPv6-land for a while, what shortcut methods do you use to discuss or keep track of subnets and specific/critical IP addresses? If I can help reduce some of the dread surrounding IPv6, we might get the project going.
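
    One trick that sometimes softens the memorization complaint: the IPv6 textual format lets the last 32 bits be written as a dotted quad, so a familiar v4 address can ride along inside the v6 address. A quick Python illustration (2001:db8::/32 is the documentation prefix, standing in for a real company prefix):

        import ipaddress

        # the v4-style tail is just an alternative spelling of the low 32 bits
        addr = ipaddress.ip_address("2001:db8:0:42::192.168.42.42")
        print(addr)   # 2001:db8:0:42::c0a8:2a2a -- same address, hex spelling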

  • Scaling databases with cheap SSD hard drives

    - by Dennis Kashkin
    Hey guys! I hope many of you are working with high-traffic database-driven websites, and chances are that your main scalability issues are in the database. I noticed a couple of things lately:

    1. Most large databases require a team of DBAs to scale. The DBAs constantly struggle with the limitations of hard drives and end up with very expensive solutions (SANs or large RAIDs, frequent maintenance windows for defragging and repartitioning, etc.). The actual annual cost of maintaining such databases is in the $100K-$1M range, which is too steep for me :)
    2. Several companies like Intel, Samsung, FusionIO, etc. have just started selling extremely fast yet affordable SSDs based on SLC flash. These drives are 100 times faster than the best spinning hard drives for random reads/writes (up to 50,000 random writes per second). Their seek time is pretty much zero, so the cost of random I/O is the same as for sequential I/O, which is awesome for databases. The drives cost around $10-$20 per gigabyte, and they are relatively small (64GB).

    So there seems to be an opportunity to avoid the HUGE costs of scaling databases the traditional way by simply building a big enough RAID 5 array of SSDs, which would cost only a few thousand dollars. Then we don't care if the database file is fragmented, and we can afford 100 times more disk writes per second without having to spread the database across 100 spindles.

    Is anybody else interested in this? I've been testing a few SSDs and can share my results. If anybody on this site has already solved their I/O bottleneck with SSDs, I would love to hear your war stories!

    PS. I know there are plenty of expensive solutions out there that help with scalability, for example the time-proven RAM-based SANs. I want to be clear that even $50K is too expensive for my project. I have to find a solution that costs no more than $10K and does not take much time to implement.
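
    Running the question's own numbers (Python; the midpoint of the quoted $10-$20/GB, with a drive count picked to stay under budget):

        drive_gb, usd_per_gb, n_drives = 64, 15, 8
        usable_gb = (n_drives - 1) * drive_gb   # RAID 5 gives one drive's capacity to parity
        cost = n_drives * drive_gb * usd_per_gb
        print(usable_gb, cost)                  # 448 GB usable for about $7,680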
