Search Results

Search found 1961 results on 79 pages for 'ideal'.

Page 53/79 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • Custom Extensions on Managed Chromebooks

    - by user417669
    I am a developer looking for the best way to set up different schools with their own custom, private extensions (i.e. School A should be the only one with access to Extension A). Theoretically, I am aware that there are a few ways to get a custom, private extension pushed out on a domain:
      1. Host the .crx on a server and click "Specify a Custom App" in the management console.
      2. Create a Domain App by uploading a zip to the Chrome Web Store.
      3. Upload the extension from my developer account to the Chrome Web Store and publish to a single "trusted tester," or make it unlisted.
    Option (1), hosting the .crx, has not been working. I am not sure why, but the extension is simply not pushing out. I link directly to the crx file, which has the right ID and MIME type; still, no dice. If anyone has any tips or suggestions for getting this to work, I would love to hear them!
    Option (2), having the school create a Domain App, seems a bit inefficient because it requires all schools to upload their own zip. So essentially I would have to email a zip file to the school and have them publish it. All updates to the extension would also require a similar process, so this doesn't seem ideal.
    I doubt that option (3) would work. If I published to the admin as a "trusted tester", I don't think that the other people in the domain would be able to access it. If it is unlisted, I do not know how an admin could find it in the Chrome Web Store dialog. Also, I would rather avoid security through obscurity.
    Has anyone had success with hosting the extension and using the "Specify a Custom App" feature? Any other suggestions for getting a custom extension pushed out by the management console? Thanks so much!

    Read the article

  • Road Warrior VPN Setup

    - by wobblycogs
    I apologise up front for the rather open-ended nature of this question, but I've got well out of my depth and could really do with some pointers. I need to set up a road-warrior VPN solution which will allow our customers to securely access a number of services we provide for them. Customer machines will be running a variety of Windows versions from XP onwards, with a variety of patch levels. Typically they will connect from the client's main offices, but not always. It is safe to assume that all clients will be behind NATs, but we may occasionally see a connection that isn't NAT'ed. The typical connection situation is therefore:
      Customer Laptop -- Router (NAT) -- Internet -- VPN Server + Firewall -- Server (Win 2008 R2, non-routable IP)
    There will initially be a dozen or so people that could connect, but that will grow quickly to around 100. It's unlikely that we'll see that many concurrent connections, though; I imagine our total VPN throughput would be <50Mbps peak. What are my options for setting this up?
    I've been trying to set up a system like this using a MikroTik router for a few days but have struggled to get it working correctly, particularly with NAT'ed clients. I've had a quick look at OpenVPN and liked what I saw, but I think it's unlikely our customers' IT departments would allow the client to be installed. Finally, I've looked at the Cisco ASA range, but I'm on a fairly tight budget, so this is less preferable, although it looks like it would work pretty much out of the box. My fallback position is to connect the server directly and use the provided VPN + firewall facilities, but that is far from ideal as the number of servers is likely to grow over time.

    Read the article

  • Looking for ballpark pricing on an affordable Cisco VOIP solution for our office

    - by guytech
    We have about 8 incoming PSTN lines that are currently on an old and antiquated Nortel Meridian ICS system. This system has been giving us some grief, so we're looking for a new VOIP solution. I've been looking at a Cisco solution; it does seem pricey, but I'm sure it's effective. Unfortunately, we probably can't afford a Cisco Unified Communications 520, which seems to be the ideal solution. We have about 15 people who need an extension and voicemail. We really don't have any need for a fancy system, just an auto attendant of some sort when people call us. It looks like we'll have to get an older router and an add-on card to get best-value pricing for what we're looking for. However, I don't know a lot about Cisco voice products, so I'm a bit lost as to what to get. The only thing I am sure on is the pricing on VOIP phones, which we expect to be about ~$100-200. However, I'm not sure what pieces of VOIP infrastructure to get. Any advice? I am familiar with Asterisk, but right now I'm looking for pricing on a Cisco solution.

    Read the article

  • How to test an HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a Host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD'. In my understanding, these two commands create the exact same HTTP request; the only difference is that in the latter I take the DNS lookup out of cURL's hands and do it manually (please correct me if I'm wrong). All well so far. But now I want to do the same for an HTTPS URL. Again, I could test it like this: curl https://DOMAIN.TLD. But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'. Now I get a cURL error: curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'. I can of course get around this by telling cURL not to care about the certificate (the -k option), but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
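
    One way to do exactly this, assuming a reasonably recent cURL (the --resolve option appeared in 7.21.3): pin the DNS answer for a name/port pair without touching the TLS layer, so certificate validation still runs against DOMAIN.TLD:
      curl --resolve 'DOMAIN.TLD:443:IP_ADDRESS' https://DOMAIN.TLD/
    cURL then connects to IP_ADDRESS but sends SNI for, and validates the certificate of, DOMAIN.TLD, which is exactly what testing one backend of a load-balanced site needs.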

    Read the article

  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses openPanel to automatically create VirtualHosts within apache2. My ideal situation is that I would have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated between Django sites. For example:
      /home/user/virtualenv1
      /home/user/virtualenv2
    My Django applications reside at /var/www, for example:
      /var/www/djangosite1
      /var/www/djangosite2
    Now I've read up on the openPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder, i.e. /etc/apache2/openpanel.d/mydomain.com.inc/django.conf, which looks something like:
      DocumentRoot /var/www/djangosite1/project
      WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
      <Directory /var/www/djangosite1/project>
          Order allow,deny
          Allow from all
      </Directory>
      Alias /static /var/www/djangosite1/project/static-root
    Now my problem is that this setup seems unable to find the virtualenv site-packages, thus not recognizing any dependencies available in the given virtualenv. Also, commenting out this line doesn't seem to break or change a thing:
      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
    For example:
      > service apache2 start
      ImportError: No module named South
    When I install South outside the virtualenv, everything works.
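
    A detail worth checking, since it matches the symptom exactly: WSGIDaemonProcess only defines a daemon process group; nothing runs in it until a matching WSGIProcessGroup directive delegates the application to it. Without one, the app runs in mod_wsgi's embedded mode, the python-path is never applied, and commenting the line out changes nothing. A sketch of the adjusted config, keeping the names from the question:
      WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
      WSGIProcessGroup mydomain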

    Read the article

  • Anonymous file sharing without login window, from Windows 7 server to XP clients

    - by Niten
    I'm trying to provide machines on a small LAN with read-only, anonymous access to files shared from a Windows 7 workstation (let's call it WIN7SVR). In particular, I don't want clients to have to deal with a login window when they navigate to, e.g., \\WIN7SVR in Windows Explorer, but we do not have a domain and synchronizing accounts between the server and clients would be intractable. There are both Windows 7 and Windows XP clients that need access to these shares. I got this working for Windows 7 clients by just enabling the Guest account on WIN7SVR and setting appropriate share permissions. Other Windows 7 machines automatically try logging in as Guest, it seems, so their users don't have to deal with the login window. The problem is with the XP clients: they can access the server if the user enters "Guest" in the login window, but I don't want users to have to do that. So from what I gather, in my limited understanding of Windows file sharing, this boils down to granting null sessions access to file shares on WIN7SVR. But I've had no success so far on that front. I've tried all the following in the Local Group Policy Editor on the Windows 7 server:
      - Set "Network access: Let Everyone permissions apply to anonymous users" to Enabled
      - Set "Network access: Restrict anonymous access to Named Pipes and Shares" to Disabled
      - Added the names of the corresponding shares to "Network access: Shares that can be accessed anonymously"
      - Added "ANONYMOUS LOGON" to "Access this computer from the network" under User Rights Assignment
    Any advice would be highly appreciated... I'm mostly a Unix guy, so I feel somewhat out of my league with Windows file sharing. I do understand that any sort of anonymous access to file shares isn't generally ideal from a security standpoint, but it's the most practical solution for us in this case, and access to our network is well enough controlled that share-level security isn't a concern.
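
    For completeness, the registry values behind those policies can be set directly (a sketch, not a guaranteed fix; "ShareName" is a placeholder for the actual share name):
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ShareName /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v RestrictAnonymous /t REG_DWORD /d 0 /f
    After changing either the policies or the registry, the Server service may need a restart (net stop server, then net start server) before the share-related values take effect.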

    Read the article

  • Doing arithmetic and passing it to the next command

    - by neurolysis
    I know how to do this in /bin/sh, but I'm struggling a bit in Windows. I know you can do arithmetic on 32-bit signed integers with:
      SET /a 2+2
      4
    But how do I pass this to the next command? For example, the process I want to perform is as follows. Consumer editions of Windows have no native automated sleep function (I believe?); the best way to perform a sleep is to use PING with the -n switch, which gives that many seconds, minus one, of sleep. The following command is effective for a silent sleep:
      PING localhost -n 3 > NUL
    But I want to alias this into a sleep command. I'd like to have it elegant, so that you enter the actual number of seconds you want to sleep after the command. Right now I can do:
      DOSKEY SLEEP=PING 127.0.0.1 -n $1 > NUL
    which works, but it's always 1 second less than your input, so if you wanted to sleep for one second you would have to use the command SLEEP 2. That's not exactly ideal. Is there some way for me to pass the arithmetic of $1+1 on to the next command in Windows? I assume there is some way of using STDOUT...
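
    DOSKEY macros can't evaluate arithmetic themselves, but a batch file somewhere on the PATH can: SET /A assigns the result to a variable, which can then be substituted into the PING. A sketch, saved as e.g. sleep.cmd:
      @ECHO OFF
      REM sleep.cmd: sleep for %1 seconds by pinging localhost %1+1 times
      SET /A COUNT=%1+1
      PING 127.0.0.1 -n %COUNT% > NUL
    With that in place, SLEEP 1 sleeps for one second, as expected.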

    Read the article

  • How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment [closed]

    - by lmttag
    Possible Duplicate: How to Deploy an ASP.NET Web API- and Browser-based Application to a Production Environment
    We have an ASP.NET Web API server that serves up a SQL Server data-driven website. The API uses JSON to transfer data from SQL Server to the front end. We need to move it to an internal production environment (nothing will be exposed on the public Internet) and we're having problems, or are just not understanding what needs to be done. There are two domains:
      - The corporate domain, where all users log in normally.
      - The process domain, which contains the database the Web API needs to access.
    The IT staff wants to put a DMZ between the two domains to house the IIS app and shield the users on the corporate domain from having direct access into the process domain. The ideal configuration is:
      corp domain (end users) <-> firewall (open port 80) <-> DMZ (web server running IIS) <-> firewall (open port 80 or 1433?) <-> process domain (IIS for Web API and SQL Server)
    We don't really understand how to deploy our browser/Web API application in this scenario:
      1. Do we need to break up our application so that all the client code is on the IIS server in the DMZ, while the Web API gets installed on the server in the process domain?
      2. Or does the entire app (client code and Web API) stay together on the IIS server in the DMZ, which then somehow accesses the SQL Server instance to get data?
      3. From the IIS server and app in the DMZ, would you simply access the Web API on the server in the process domain by going to http://server/appname/api/getitems?
      4. In the second firewall, between the DMZ and the process domain, would you have to open port 1433, or just port 80, since the Web API is an HTTP endpoint?
      5. Or is there some better way of deployment (i.e., how are ASP.NET Web API single-page applications, written all in HTML5 and JavaScript, supposed to be deployed to production environments)?
    NB: The servers are Win2k8 R2, SQL Server 2k8 R2, and IIS 7.5.

    Read the article

  • What are the replacement options for an IDE hard disk for a DOS-based system?

    - by dummzeuch
    I have got a few "embedded" systems running MS-DOS 6.2 which boot from and store data to IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements are:
      - DOS must be able to install and boot from these drives.
      - They must be able to sustain heavy (mostly write) access.
      - If possible, they should be able to survive moderate vibrations (not too bad, since the current hard disks have survived several years of that).
    I considered the following options so far:
      - Other IDE hard drives: unfortunately, modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable ones any more.
      - SSDs: there are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that writing speed does not deteriorate too much (the old hard disks are no race cars either).
      - Compact Flash: there are adapters for using CF with IDE controllers, and they work fine. DOS can boot from them and they have no problems at all with vibrations. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.
      - IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.
    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

    Read the article

  • Network Security Device/Software

    - by Campo
    We currently run Symantec AntiVirus Corporate 10.2. The software is really easy to manage on a network, and the actual virus detection isn't bad, but the malware detection is crap. We recently were infected with an email bot that got us put on some block lists. This has been resolved, but I cannot have that happen again. I would like to find a program as easy to manage as Symantec that I can install on all the users' workstations as well as the servers. We run a Windows 2003 domain. We have a couple of 2008 test servers in the environment. Most of the workstations are XP, though I am using Windows 7, and Symantec is not compatible with this OS... So we need a solution that covers all those operating systems. If it could be installed on Macs too, that would be a bonus, though not necessary at all. This software must detect viruses AND malware. I am looking for something that combines the features of anti-malware programs like Malwarebytes or Spybot with an antivirus program like Symantec or AVG. Alternatively, if there is a piece of hardware that acts as a firewall and router and does packet inspection for viruses/spam, that would be the most ideal solution; I could then supplement it with a piece of software to pick up what the hardware misses. Thank you for your suggestions.

    Read the article

  • How can I get Windows 8 to automatically disable touch when I am using my Wacom pen and turn it back on when I am not

    - by Robert
    I have an HP convertible tablet computer which I just upgraded to Windows 8. The problem (which existed under Windows 7 as well) is that this tablet has both a capacitive touch screen (with multi-touch) AND a Wacom-type tablet built into the screen that works using electro-magnetic resonance with the provided stylus.
    My use case: most of the time I am happy using my fingers and the touch interface for navigation and whatnot. However, when I want to get down to serious note-taking/drawing, I want to use the Wacom functionality. The problem is that any comfortable writing position has me resting my arm/hand on the screen, which activates the touch technology (despite supposed palm-detection algorithms) and completely screws up my input paradigm.
    My ideal solution: ideally, since Wacom technology senses when the pen is "close" to the screen, I would love to have touch automatically disabled whenever the Wacom pen is detected, and turned back on when it is out of range. This would allow me to seamlessly switch between the two input methods, and since I NEVER want to use both at once, it would work perfectly for me.
    An acceptable alternative: as a next-best option, it would be great to be able to turn off the touch functionality (leaving the Wacom in place) whenever I enter specific apps (e.g. OneNote, Photoshop, Gimp, Pencil, etc.) and then have it turn back on when I leave that app.
    A worst case that at least lets me use my PC: if I could create a shortcut (tile or otherwise) that flips touch on and off without going all the way through the nested computer settings, that would be better than nothing.
    Thanks in advance for the help with one or more of the above.
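
    For the shortcut option, one approach to try (a sketch, not a tested recipe): devcon.exe from the Windows Driver Kit can enable or disable a device from a script, given its hardware ID. The ID below is a placeholder; list candidates first with devcon hwids * and look for the touch digitizer's HID entry:
      @ECHO OFF
      REM toggle-touch.cmd: disable or re-enable the touch digitizer
      REM "HID\TOUCH_PLACEHOLDER" must be replaced with the real hardware ID
      IF "%1"=="off" (
          devcon disable "HID\TOUCH_PLACEHOLDER"
      ) ELSE (
          devcon enable "HID\TOUCH_PLACEHOLDER"
      )
    A tile or shortcut pointing at toggle-touch.cmd off (run as administrator) then gets close to the worst-case option above.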

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple: my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However, as part of this application we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them; if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file, which is far from ideal for an environment with a high amount of churn; and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:
      - uses a database (MySQL, SQLite, etc.) for configuration and data storage
      - exposes an API for adding/removing hosts and services
    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination (see the sketch below).
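
    The sar idea can be made concrete with very little machinery (a sketch under assumptions: sysstat and the AWS CLI are installed, the instance role can write to the bucket, and the bucket name is hypothetical):
      #!/bin/sh
      # collect-perf.sh: record 1-minute sar samples for the life of the job,
      # then ship the binary data file to S3 keyed by instance ID.
      INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      sar -o /tmp/sar.data 60 > /dev/null 2>&1 &
      SAR_PID=$!
      run_job            # placeholder for the actual processing work
      kill "$SAR_PID"
      aws s3 cp /tmp/sar.data "s3://my-metrics-bucket/sar/${INSTANCE_ID}.data"
    Full-resolution data then lives indefinitely in S3, addressable by the same instance ID the internal database already tracks, with no node registration anywhere.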

    Read the article

  • VirtualBox: using physical partition as virtual drive

    - by Hamman Samuel
    Background: I am using VirtualBox installed on Windows 7. From within VirtualBox I am using Xubuntu as a virtual OS. The reason I chose this approach is so that I don't have to keep turning off Windows and rebooting into Xubuntu every time I need to switch OSes. And VirtualBox's seamless mode is pretty amazing, allowing me to see Xubuntu and Windows 7 all on one screen.
    Issue: now I am thinking of a way to have Xubuntu more integrated into my system. By this I mean I want to have a physical partition for Xubuntu, but I still want the feeling of seamless mode.
    Question: so finally, my question is: is it possible to load a partition in VirtualBox as a virtual OS?
    Case example: the ideal scenario would be: I physically boot up and log in to Windows 7. Now I want to access Xubuntu, so I load VirtualBox and access my Xubuntu partition without rebooting. And the other way around too, i.e. I boot up the system, log in to Xubuntu, and can access the actual Windows 7 partition through VirtualBox.
    Other info: please note that I am not talking about getting access to files, as I have a completely separate partition for my files and am very familiar with VirtualBox's Shared Folders option.
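
    VirtualBox does support this via raw disk access: a small .vmdk descriptor file can point at a physical partition, and the VM then uses that partition as its disk. A sketch for the Windows 7 host side (run from an elevated prompt; the disk and partition numbers are examples, so check yours in Disk Management first):
      VBoxManage internalcommands createrawvmdk -filename C:\VMs\xubuntu.vmdk ^
          -rawdisk \\.\PhysicalDrive0 -partitions 3
    Attach xubuntu.vmdk to a VM as its hard disk and the existing Xubuntu partition boots inside VirtualBox. One caution worth flagging: an OS that is both booted natively and run from a VM sees very different hardware in each case, so this tends to need care with bootloaders and drivers; treat it as an experiment, not a drop-in setup.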

    Read the article

  • Replace Linux Boot-Drive | ext3 to btrfs

    - by bardiir
    I've got a headless server currently running Debian Linux:
      Linux vault 3.2.0-3-686-pae #1 SMP Mon Jul 23 03:50:34 UTC 2012 i686 GNU/Linux
    The root filesystem is located on an ext3 partition on the main hard drive. My data is located on multiple hard drives that are bundled into a storage pool running btrfs:
      UUID=072a7fce-bfea-46fa-923f-4fb0827ae428  /      ext3   errors=remount-ro             0 1
      UUID=b50965f1-a2e1-443f-876f-578b5f93cbf1  none   swap   sw                            0 0
      UUID=881e3ad9-31c4-4296-ae60-eae6c98ea45f  none   swap   sw                            0 0
      UUID=30d8ae34-e2f0-44b4-bbcc-22d761a128f6  /data  btrfs  defaults,compress,autodefrag  0 0
    What I'd like to do is to place / into the btrfs pool too. The ideal solution would provide the flexibility to boot from any disk in the system alike, so if the main drive fails I'd just need to swap another one into the main slot and it would be bootable like the main one. My main problem is that everything I do needs to result in a bootable system that is open to ssh logins via the network, as this server is 100% headless, so there is no possibility to boot it from a live CD or anything like that. So I'd like to be extra sure everything works out fine :) How would I best go about this? Can anybody point me to guides or whip something up for these tasks? Anything I forgot to think about?
      - Copy root data into the btrfs pool, adjust mountpoints, ...
      - Adjust GRUB to boot from the btrfs pool UUID or the local device where GRUB is installed
      - Sync GRUB to all hard drives so every drive is equally bootable (is this even possible without destroying the btrfs partitions on the drives, or would I need to disconnect the drives, install GRUB on them and then connect them back with a slightly smaller partition?)
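
    A rough sketch of the migration, with the caveats that this is untested, that device names are examples, and that booting from btrfs needs a GRUB with btrfs support (1.99 or newer; squeeze's grub-pc 1.98 does not have it, so either pull GRUB from backports or keep a small non-btrfs /boot):
      # 1. put the new root in its own subvolume inside the existing pool
      btrfs subvolume create /data/@root
      rsync -aAXH --exclude=/data --exclude=/proc --exclude=/sys --exclude=/dev / /data/@root/
      # 2. in /data/@root/etc/fstab, mount the pool UUID at / with subvol=@root
      # 3. chroot into the new root and install GRUB on every pool member,
      #    so each disk is equally bootable
      grub-install /dev/sda
      grub-install /dev/sdb
      update-grub
    Since GRUB embeds itself in the space before the first partition, installing it on each drive does not touch the btrfs data, so the disconnect-and-shrink dance from the last question should not be necessary.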

    Read the article

  • dom0 enable IPv6 for guests

    - by user98651
    I am looking at deploying IPv6 to my virtual machines. Right now I have v6 working great on the dom0, using a 6in4 tunnel provided by Hurricane Electric since I do not have native v6. However, I would like to distribute some of the /48 I am receiving to the domUs (a /64 per machine would be ideal, but I am open to your suggestions). Static configuration on the domU side is fine. All I want to accomplish is getting the traffic to pass through the dom0 to the domU. To say the least, I'm still trying to wrap my head around all the virtual interfaces and bridges Xen creates. Yes, I have Googled around for this a bit and have not found anything great. I tried using two "vif-route6" bash scripts with no luck (possibly due to my ignorance of Xen networking). I am still stuck, mainly on how to configure the dom0. I would like to imagine this problem is relatively easy to solve, and I look forward to your suggestions and help! Edit: to clarify my end goal, I want to get IPv6 to the domU guests. I am completely open to suggestions but am hoping for something other than setting up a tunnel for every guest.
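
    For the routed approach, a sketch of the dom0 side for one guest (documentation prefixes are used here; substitute pieces of the delegated /48, and note the vif name depends on the domU ID):
      # dom0: enable forwarding and route a /64 at the guest's vif
      sysctl -w net.ipv6.conf.all.forwarding=1
      ip -6 addr add 2001:db8:1::1/64 dev vif1.0
      ip -6 route add 2001:db8:1::/64 dev vif1.0
    The domU side is then plain static configuration: address 2001:db8:1::2/64 with a default route via 2001:db8:1::1. This avoids per-guest tunnels entirely; the dom0 routes everything that arrives over the single HE tunnel.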

    Read the article

  • configure a Macbook Pro to use external monitor at boot (Debian Linux)

    - by Eric
    In the spirit of reuse, I've installed Debian (version 6.0.5 "squeeze") on my wife’s old Macbook Pro (circa 2009 or so), to repurpose it for various tasks. The catch is the display is flaky. It will last a random amount of time, between 2 minutes and 2 hours, before freezing and graying out. This is a known issue with that generation of MBP. Fortunately it’s no problem for me, as I plan to use it with an external monitor anyway. Which brings us to the problem: How do I configure this thing to output to the external display by default, and hopefully disable the built-in LCD? The ideal solution would be to modify a setting in the EFI (BIOS), but I’m not holding out much hope for that. Next best thing would be a kernel option I can pass to the NVIDIA driver. What won’t work is a solution that doesn’t give me a display until X starts. I need to have console access, especially given that the built-in LCD is dying, and any day now might give out completely. So far I haven’t been able to find anything online. lspci says I’ve got an NVIDIA GeForce 9400M Help is much appreciated! Eric PS if this question is better suited to the Unix & Linux area, pls advise and I will move it.

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD, and I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task in hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this all to work smoothly with all four environments running at once, without losing the SSD performance?

    Read the article

  • Allowing access to company files across the internet

    - by Renaud Bompuis
    The premise: I've been tasked with finding a solution to the following scenario:
      - Our main file server is a Linux machine.
      - On the LAN, users simply access the files using SMB.
      - Each user has an account on the file server and his/her own access rights.
      - User accounts are simple passwd/group security accounts, not NIS/LDAP.
    The problem: we want to give users (or at least some of them, say if they belong to a particular group) the ability to access the files from the Internet while travelling. Ideally I'd like a seamless solution. Maybe something that allows the user to access a mapped drive would be ideal. A web-oriented solution is also good, but it should present files in a way that is familiar to users, in an explorer-like fashion for instance. Security is a must, of course; users would be expected to log in, and the connection to the server should be encrypted. Anyone have some pointers to neat solutions? Any experiences?
    Edit: the client machines are Windows only.

    Read the article

  • Print over remote CUPS server, but just show a subset of the printers.

    - by jdm
    I'd like to print from my Ubuntu laptop (Karmic) to some networked printers. Our organisation uses a CUPS server with several hundred printers. What I know I can do is:
      CUPS_SERVER=printers.company.com acroread document.pdf
    and then Adobe Reader shows me all available printers to select from. However, it takes a couple of minutes to display the large list, which is really annoying. (The desktop PCs here suffer from this, too.) The other option is to add a new printer with an address like ipp://printers.company.com/printer/bldg1_hp8150 to the Ubuntu printer configuration (= local CUPS server). However, it asks me for a driver. I don't want to / can't always specify a driver, since some printers don't appear in the list. I'd like to let the remote CUPS server handle the driver part (like it does when I set CUPS_SERVER) and do no more preprocessing/"driver stuff" on my side. The ideal thing would be if I could somehow add the remote printer list to my local CUPS server and apply a filter, so that it would just display printers a la bldg1_*. This feature was available in KDE 3.x, but I can't find something similar in Ubuntu/Gnome. Any suggestions?
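
    One way to get the filtered-subset behaviour (a sketch; the queue-name pattern and host are from the question, and it assumes the server exports its queues over IPP): create local queues that point at the remote ones without assigning a PPD. A queue with no driver is a raw queue, so jobs go to the server unfiltered and the server keeps doing the driver work:
      #!/bin/sh
      # mirror only the bldg1_* printers as local raw queues
      for p in $(lpstat -h printers.company.com -a | awk '$1 ~ /^bldg1_/ {print $1}'); do
          lpadmin -p "$p" -E -v "ipp://printers.company.com/printers/$p"
      done
    The local print dialog then lists only the bldg1_* queues, and the slow several-hundred-printer enumeration never happens.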

    Read the article

  • What format should I convert my iTunes Videos to?

    - by Aequitarum Custos
    I recently got Daniusoft Media Converter and was about to strip the DRM off my iTunes library when I found I had a huge decision to make. Converting my music to MP3 seems obvious; the video formats, however, I'm at a loss on. It has "optimized" formats for all types of video players such as iPhone, Zune, Droid (which I own), and just a huge list! I bought several seasons of House and a few movies, and I don't know which format would be optimal to avoid losing quality. While I love my Droid, I have no intention of watching full episodes or movies on it, and I have a feeling that format would reduce quality. One of the options sections is HD Video, and it has HD MPEG-4 Video (*.mp4). That one seems like it would be the best format to convert to without losing quality, but again, I'm unsure. Has anyone used this software, or does anyone have general expertise in video formats, who can give a recommendation on what file format I should convert my videos to? Format requirements:
      - No loss in quality
      - Portability would be ideal (as in, it will play on my Windows machine and potentially my Mac, and be streamable to a big screen)

    Read the article

  • What Device/System to use as a "router on a stick"

    - by Jeff Leyser
    I need to create several distinct VLANs and provide a way for traffic to move between them. A "router on a stick" approach seems ideal:
      Internet
        |
      Router with Trunking Capability ("router on a stick")
        |
        |  (trunk between router and switch)
        |
      Switch with Trunking Capability
        |            |            |            |            |
      LAN 1        LAN 2        LAN 3        LAN 4        LAN 5
      10.0.1.0/24  10.0.2.0/24  10.0.3.0/24  10.0.4.0/24  10.0.5.0/24
    We have trunk-capable Layer-2 switches. The question is what to use as the router on a stick. My choices seem to be:
      1. Use an existing Cisco 5505 ASA firewall. It appears the ASA can do the routing, but it's a 100Mbps device, and so seems sub-optimal at best.
      2. Buy a router. This seems overkill.
      3. Buy a Layer-3 switch. Also seems overkill.
      4. Use an existing Linux box as a router.
      5. Use a new Linux box as a router.
      6. Something I'm not thinking of.
    I think either (4) or (5) is my best option, but I'm not sure how to choose between them. I expect the amount of traffic that has to cross the VLANs to be somewhat small, but bursty. How much load does routing add to a CentOS machine?
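
    For options (4)/(5), the Linux side is small. A sketch with iproute2, where the trunk NIC name and the VLAN-ID-per-subnet mapping are assumptions:
      #!/bin/sh
      # eth0 carries the 802.1Q trunk from the switch
      sysctl -w net.ipv4.ip_forward=1
      for i in 1 2 3 4 5; do
          ip link add link eth0 name "eth0.$i" type vlan id "$i"
          ip addr add "10.0.$i.1/24" dev "eth0.$i"
          ip link set "eth0.$i" up
      done
    Each VLAN then sees 10.0.N.1 as its gateway, and inter-VLAN routing is plain kernel forwarding; for small, bursty traffic the load on a modern CentOS box should be negligible.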

    Read the article

  • MySQL config for 2GB RAM

    - by Tiffany Walker
    How is my config? Does it work well for 2GB? What would be an ideal config for a 2GB RAM server?
      [mysqld]
      set-variable = max_connections=500
      log-slow-queries
      safe-show-database
      local-infile=0
      skip-networking
      symbolic-links=0
      max_connections = 500
      key_buffer = 256M
      myisam_sort_buffer_size = 64M
      join_buffer_size = 2M
      read_buffer_size = 2M
      sort_buffer_size = 2M
      read_rnd_buffer_size = 2M
      thread_concurrency = 16
      table_cache = 1024
      thread_cache_size = 50
      wait_timeout = 7200
      connect_timeout = 10
      tmp_table_size = 32M
      max_allowed_packet = 160M
      max_connect_errors = 10
      query_cache_limit = 1M
      query_cache_size = 32M
      query_cache_type = 1

      [mysqld_safe]
      open_files_limit = 8192

      [mysqldump]
      max_allowed_packet = 16M

      [myisamchk]
      key_buffer = 64M
      sort_buffer = 64M
      read_buffer = 16M
      write_buffer = 16M

    UPDATE 2012-03-28 12:58 EDT by RolandoMySQLDBA: please run these queries and paste the results into your question.
    For MyISAM:
      SELECT CONCAT(ROUND(KBS/POWER(1024,
          IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.4999),
          SUBSTR(' KMG',IF(PowerOf1024<0,0,
          IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_key_buffer_size
      FROM (SELECT LEAST(POWER(2,32),KBS1) KBS FROM
          (SELECT SUM(index_length) KBS1 FROM information_schema.tables
           WHERE engine='MyISAM'
           AND table_schema NOT IN ('information_schema','mysql')) AA) A,
          (SELECT 2 PowerOf1024) B;
    For InnoDB:
      SELECT CONCAT(ROUND(KBS/POWER(1024,
          IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.49999),
          SUBSTR(' KMG',IF(PowerOf1024<0,0,
          IF(PowerOf1024>3,0,PowerOf1024))+1,1)) recommended_innodb_buffer_pool_size
      FROM (SELECT SUM(data_length+index_length) KBS
            FROM information_schema.tables WHERE engine='InnoDB') A,
           (SELECT 2 PowerOf1024) B;
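
    One quick sanity check on the config above (a rule of thumb, not from the question): MySQL's worst-case footprint is roughly the global buffers plus max_connections times the per-thread buffers. Here that is about
      256M (key_buffer) + 32M (query cache) + 500 x (2M sort + 2M read + 2M read_rnd + 2M join)
      = 288M + 500 x 8M = roughly 4.3G
    which is more than double the 2GB on the box. Either max_connections = 500 or the 2M per-thread buffers need to come down for this config to be safe under load.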

    Read the article

  • Enter network credentials as part of batch script

    - by Michael
    WinXP: I have several system services that are needed to run some machinery in my lab. The machine these services run on uses a lab login that has administrator rights. Our IS department, unfortunately, has it set up such that at some point during the night the login "loses" the privilege level needed to start/stop these services. The account stays logged in, but the software controlling my hardware becomes unresponsive. In order to get things back up and running, I have to stop the system services and restart them. Because of the security settings, however, I have to re-enter the user password to start the service (even though the user was never logged out). If I don't, I get "This service cannot be started due to a logon failure" and have to enter the password.
    What would be ideal is to have a batch script run before anyone gets into work that stops all of the necessary services, enters the user credentials when prompted, and then restarts them, so that everything is ready for first shift to run. I assumed that using the Task Scheduler in Windows would work, as it allows you to run batch files with a user's name and password, but this didn't seem to do the trick: with this setup I would arrive to find that all the services were stopped but not started again (presumably because the authentication failed). The batch file is about as simple as it gets:
      net stop "Service1"
      net stop "Service2"
    then restart in reverse order based on dependency:
      net start "Service2"
      net start "Service1"
    What would it take to accomplish what I'm trying to do and restart the services?
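
    Since the failure is the service's stored logon, one thing worth trying (a sketch; account and password are placeholders) is re-applying the credentials non-interactively with sc config before each start, so nothing prompts:
      net stop "Service1"
      sc config "Service1" obj= ".\LabUser" password= "LabPassword"
      net start "Service1"
    Note that the space after each = is required by sc's parser. Embedding a plaintext password in a scheduled batch file is its own security trade-off, so the file should at least be ACL'd to administrators only.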

    Read the article

  • Upgrade to Q9550 or i7 920 on a budget?

    - by evan
    I'm planning to upgrade my computer and am torn between maxing out the system I have or investing in the X58 architecture. I'm currently using an E6600 Core 2 Duo with 4GB of RAM (800MHz) on an Asus P5K-E motherboard, which I built two years ago. My original plan was that one day I'd upgrade the machine to 8GB (1066MHz, the max the P5K-E allows) and to the Core 2 Quad Q9550, to give the machine a good four years of life. However, that was before the i7 came out. I use my computer mainly for software development, which I do inside virtual machines, and the i7 seems ideal for that because it is no longer limited by the speed of the FSB. And when I looked into it, getting 8GB of DDR3 RAM isn't much more expensive than 8GB of DDR2, and the i7 920 is comparable in price to the Q9550, which doesn't make much sense to me. So the question is: is it worth swapping the motherboard out for around $250 and upgrading all three components, or using that money on an SSD or 10,000rpm drive for the existing system's OS/apps/virtual-machine drive? Or should I just put the $250 towards a completely new machine in a year or two? Would the i7 really give that much of a boost compared to the Q9550 for what I'd be using it for? Thanks in advance for your input!!!

    Read the article

  • Are there any tools to migrate your files, applications, and settings to a new Windows computer?

    - by calbar
    I've decided to upgrade my laptop on a regular basis and one of my main concerns is recreating my entire Windows 7 environment every time I do this. I'm talking toolbar positions, login settings, start menu items, applications and all their customizations... everything but my drivers. It literally takes weeks to fully recreate my working environment, not to mention the risk of user error or just simply forgetting "how I liked it." I'm assuming I won't find something as painless as Apple's Migration Assistant for Windows, but maybe there's something out there that can at least package up your apps and their settings? Bonus points if you can point it to your personal files, too - whatever's the quickest way to get from one machine to the next. I intend to install Windows fresh to remove bloatware on every machine that I buy, then selectively install the drivers I need. Something that accommodates loading my old apps into this newly prepared environment would be ideal. One random point of concern is in regard to application settings that refer to old hardware. I'm not sure if there's anything that can be done about this. If you have any thoughts, feel free to share. Thanks for your help!

    Read the article
