Search Results

  • Limit Outlook Calendar meeting attachments to just Required attendees

    - by Jason Pearce
    The management team at my company often attaches documents (Word, Excel, PDFs) to their Outlook Calendar meeting requests. The meeting requests are sent to the managers, but also to their assistants. The desire is for everyone to be able to view the full meeting request and its content, but to limit the ability to open the attachments to just the managers.

    Is there a way in Outlook 2003 and/or 2007 to limit access to attachments that accompany meeting requests? Ideally, could access to the attachments be controlled by the "Select Attendees and Resources" window when selecting individuals from the Global Address List? Could those in the Required field have access to the attachments while those in the Optional or Resources fields do not?

    My suggestion was to simply place all meeting attachments in a shared network folder with read/write access limited to managers. They would then place fully qualified links to those files in the body of the meeting request. While everyone would receive and see the links, only a few would have access. This, however, wasn't easy enough for them, so I'm looking for some other ideas.
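
    For the shared-folder workaround, the permissions part is a one-liner; a sketch assuming a Windows file server and a "Managers" security group (both names are hypothetical placeholders):

        rem Grant only the Managers group read access to the meetings share (sketch)
        icacls \\fileserver\meetings /grant "DOMAIN\Managers:(OI)(CI)R"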

  • Cache Control Headers with IIS 7.5

    - by Brad
    I'm trying to wrap my head around client-side (web browser) caching and how it works in relation to IIS 7.5 cache control headers. In particular: if we want to force clients to reload cached resources, how must IIS be configured? Do we need to set "expire web content immediately" if the resources on the server have a more recent Modified Date (or ETag value)?

    Right now we're not setting any cache headers. So if I set a cache header of no-cache (which I think is the equivalent of "expire web content immediately"), will that force the web browser to obtain a new version of a particular file? Or will the browser only request a new version after it deems its current copy to be stale, and from that point forward not cache it?

    Would a best practice be to set a cache control flag of 1 week, then 8 days before I know I am going to make a change, set the cache control down to, for instance, 30 minutes? But if I do that and then need to immediately expire an item from users' caches because there was an issue with it, how do I do that?
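
    For reference, a minimal sketch of how a one-week max-age can be set for static content in web.config (the timespan value is just an example to adapt; switching cacheControlMode to DisableCache is one way to send no-cache style headers immediately):

        <configuration>
          <system.webServer>
            <staticContent>
              <!-- Send Cache-Control: max-age for one week on static files -->
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </configuration>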

  • Serious 64-bit laptop

    - by Daniel Gehriger
    For the past couple of years, I have been using an IBM ThinkPad T60p for daily work (software development, desktop & embedded). I am extremely satisfied with this machine due to its robustness. It also has a few features I depend on: a high-resolution 15.0" TFT FlexView display at 1600x1200 (UXGA), an excellent keyboard, and decent graphics and CPU performance.

    Some of the software I develop benefits from larger amounts of RAM, and 3GB (Windows 7 32-bit) or 4GB (Windows 7 64-bit on the T60p) are no longer sufficient. My customers run desktop computers with 20GB and more, and I need at least 8GB just to be able to run reasonable test cases. So I'm shopping around for a new laptop, but I'm struggling to find anything that matches my requirements:

    - must run Windows 7 64-bit Pro or higher
    - must support at least 8GB of RAM (more is better)
    - high screen resolution! While I prefer 4:3, I can live with wide screen, but I really hope to find something with a vertical resolution similar to what I have now
    - portable, so smaller than 16" but at least 14". I realize that FlexView isn't available anymore, but I'd like to avoid a glossy screen if possible
    - decent (not more) graphics performance, ideally hybrid (I'm doing a lot of CAD, never games)
    - good keyboard
    - reasonable CPU, but I'm still fine with my current Core 2 Duo, so that shouldn't be too complicated

    The T60p fits all those requirements except the 8GB of RAM. Can you help me find a current notebook that would match most of them? I don't mind changing brand. Thanks!

  • What is a suitable simple, open web server for Windows?

    - by alficles
    I'm looking for a dead simple web server for Windows. Load will not be high, as it will primarily be serving binaries for a WPKG update service. It needs to serve the entire contents of a single folder over HTTP on a configurable (high) port. No CGI or other scripting is required, but it might be nice for future features.

    I started with Mongoose, since it doesn't even have an installation requirement (a very nice perk), but it fails to start when run as a service. (Technically, it acts as its own installer.) I've investigated lighttpd as well, but it appears to be minimally (at best) tested on Windows. And naturally, I'm looking for something free. As in beer is good, but speech is better, as always.

    Edit: I didn't mention this initially, but non-tech people will be doing the install. They'll have whatever script I write for the install, but the goal is a simple system that is easy to troubleshoot. (I almost worded this question "What is the best...", but Server Fault rightly observed that that is a subjective question. And it's really not an optimization problem; any suitable solution will work. I just can't seem to find one for Windows.)
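
    As a baseline for comparison (not a Windows service, just a quick test), if a Python 2 install happens to be available, serving one folder on a chosen port is a one-liner; the path below is a hypothetical example:

        cd C:\wpkg\packages
        python -m SimpleHTTPServer 8080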

  • HP/IBM alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP).

    Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little reticent to go that way, as it's a bit too "entry level" for my liking. In particular, I would be very keen on more redundancy in power, networking, etc. I'm also very aware that you get what you pay for.

    Can anyone therefore recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether they cover all the hardware I will need. Some options appear to need controllers separate from the disk enclosures, etc.

  • Why does Apache httpd tell me that my name-based virtual hosts only work with SNI-enabled browsers (RFC 4366)?

    - by Arlukin
    Why does Apache give me this error message in my logs? Is it a false positive?

        [warn] Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366)

    I have recently upgraded from CentOS 5.7 to 6.3, and with that to a newer httpd version. I have always written my SSL virtual host configurations like the ones below, where all domains that share the same certificate (mostly/always wildcard certs) share the same IP. But I never got this error message before (or did I, and just didn't look closely enough at my logs?). From what I have learned, this should work without SNI (Server Name Indication).

    Here are the relevant parts of my httpd.conf file. Without these VirtualHosts I don't get the error message.

        NameVirtualHost 10.101.0.135:443

        <VirtualHost 10.101.0.135:443>
            ServerName sub1.domain.com
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
            SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
            SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
            SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
        </VirtualHost>

        <VirtualHost 10.101.0.135:443>
            ServerName sub2.domain.com
            SSLEngine on
            SSLProtocol -all +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNull:!EDH:!DH:!ADH:!eNull:!LOW:!EXP:RC4+RSA+SHA1:+HIGH:+MEDIUM
            SSLCertificateFile /opt/RootLive/etc/ssl/ssl.crt/wild.fareoffice.com.crt
            SSLCertificateKeyFile /opt/RootLive/etc/ssl/ssl.key/wild.fareoffice.com.key
            SSLCertificateChainFile /opt/RootLive/etc/ssl/ca/geotrust-ca.pem
        </VirtualHost>

  • At what point does the performance gap between GPU & CPU become so great that the CPU is holding back a system?

    - by Matthew Galloway
    I know that, generally speaking, for gaming performance the GPU is the primary factor holding back performance, with everything else, such as RAM/motherboard/PSU/CPU, being secondary in importance to the graphics card. But at some point the other components ARE going to be significant in holding back the whole system!

    For instance, nobody would be silly enough to play modern games with 512MB RAM and the very latest graphics card (such as an HD 7970); I bet the performance increase over the same 512MB system with only a mid-range card would be non-existent! Thus it would be a "waste" for such a person to buy any high-end graphics card without first resolving the system's other problems. The same point applies to other components: if a system only had a Pentium II, a current high-end graphics card would be wasted on it!

    So my core question is: how do you determine the point at which, for your system, spending on extra GPU power is completely "wasted"? (A slightly more nuanced question is at what point the extra graphics power might not be "wasted" but would be "sub-optimal" value for money, where the expenditure should instead be split between the graphics card and other components. A gamer obviously shouldn't always just spend on upgrading the graphics card, but needs to balance it out.)

  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

    - PC running Windows XP with an internal WiFi adapter
    - Base station with both WiFi and a wireless modem (WiModem)
    - Mobile device with both WiFi and WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth.

    We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem. Here's what that looks like:

        PC
            WiFi 1: 192.168.2.10/24
            WiFi 2: 192.168.3.10/24
            Default route: 192.168.2.1

        Base Station
            WiFi 1: 192.168.2.1/24
            WiFi 2: 192.168.3.1/24
            WiModem: 192.168.4.1/24

        Mobile
            WiFi: 192.168.3.20/24
            WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup is problematic once you have multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
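
    For reference, the manual route on the XP machine might look something like this sketch, using the addresses from the layout above (-p makes it persistent):

        rem Reach the mobile's WiModem subnet via the base station's WiFi 2 address,
        rem so return traffic from the mobile also travels over the WiModem path (sketch)
        route -p add 192.168.4.0 mask 255.255.255.0 192.168.3.1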

  • How can I take browser screenshots at a higher resolution than my browser supports?

    - by user53575
    I need to take a screenshot of a website as it would appear on a very high-resolution monitor... say 16000x12800 pixels. My laptop's screen has a native resolution of 1280x800. Basically, I need to simulate a monitor resolution much higher than my monitor and video card actually support. I want the screenshot of the site to look pretty much how it does when you hit Ctrl+Minus (zoom out) in Firefox repeatedly, but without any loss of pixels due to scaling.

    How can I do this? Is there some way to use virtual machine software to simulate a super-high-res display? If not, is there some way to open a browser window bigger than the screen, and then capture its contents as a PNG somehow? Anything else that might work?

    There was an answer here: http://superuser.com/questions/120266/how-can-i-take-browser-screenshots-at-a-higher-resolution-than-my-browser-support but it doesn't work. Firefox remains at the resolution of the physical screen; the window blinks and shrinks back to normal resolution. Please help!
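
    One avenue that may be worth exploring, assuming access to a Linux machine or VM: render the page into a virtual framebuffer larger than any physical screen and capture it with ImageMagick. This is a sketch, not a tested recipe; whether the X server accepts a resolution this large depends on the build:

        # Start a virtual display at the desired resolution
        Xvfb :99 -screen 0 16000x12800x24 &
        # Open the page on that display and give it time to render
        DISPLAY=:99 firefox http://example.com &
        sleep 15
        # Capture the whole virtual screen to a PNG
        DISPLAY=:99 import -window root screenshot.png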

  • How to handle OpenVPN client as a service, when the laptop is physically on the network already?

    - by James
    The Setup

    I've gotten OpenVPN working on our Windows XP laptops. Users are limited, so I went ahead and set the OpenVPN client to run as a service, which is great anyway because it means they are on the VPN before logging in, so login scripts work, plus we can do remote support even if the user cannot log in (such as connecting via VNC or resetting passwords). It is also configured to send all traffic over the tunnel, so when, for example, they browse the internet, it is just like browsing from our corporate network.

    The Question(s)

    So, I'm wondering how the OpenVPN client acts when the computer is already physically on the same network as the OpenVPN server. Right now, the client is configured to connect to the public DNS name, which will resolve to the public IP address, which will NOT get reflected back to the OpenVPN server, so it is effectively blocked from connecting to the OpenVPN server while on the network. Is that a good thing? Or will it constantly try to connect, using up system resources and network resources? We will likely have hundreds of laptops regularly on the physical network with this, so it could contribute to a lot of unnecessary network chatter.

    Alternatively

    Would it be better to have the firewall reflect the port back to the OpenVPN server and let it connect? Or have our internal DNS resolve the name to the private IP and allow them to connect directly? Would traffic then go over the VPN connection (which I do not want when already on the physical network)? Or is it possible to tell it to ignore the connection when the client and server are already on the same network?

    TLDR

    What's a sane way of handling an OpenVPN client running as an always-on service when the client and server will often be on the same network?
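
    For what it's worth, a sketch of the sort of client options that pace reconnection attempts, so an unreachable server generates little chatter (the directive names are standard OpenVPN options, but their exact behavior varies by version, and the values here are assumptions):

        client
        dev tun
        proto udp
        # Public name; on the LAN this is unreachable in the current setup
        remote vpn.example.com 1194
        # Keep retrying DNS resolution rather than exiting
        resolv-retry infinite
        # Detect dead connections without flooding the network
        keepalive 10 120
        # Wait longer between reconnection attempts (seconds)
        connect-retry 60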

  • What is the maximum number of Remote Desktop connections for a small server?

    - by Jay Wen
    I have a small server running MS Server 2012. The CPU is a Xeon E3-1230 V2 @ 3.30GHz (4 cores, 8 logical processors) with 8GB RAM. The main HD is a Samsung 840, and the big storage is a 4-disk WD Black RAID 10 array in a Synology NAS enclosure.

    My question is: given this hardware, approximately how many users can the system support via Remote Desktop Connection? Assume there are no licensing limits. These are not admin users; I know there is a two-admin limit. This boils down to: what resources does one remote connection require? RAM? A percentage of the CPU? Network bandwidth?

    I guess the base case would be a connection where the user is inactive or simply browsing CNN. Once you know this, you know how many you could fit on the machine before something is maxed out. In reality, users would mostly be in Excel (multi-MB spreadsheets), and I know the approximate resources currently required by each copy of Excel.
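
    Since the per-connection cost is the unknown, one option is to measure rather than estimate, assuming the standard Terminal Services performance counters are present (counter paths can vary by Windows version):

        rem List current sessions
        query session
        rem Sample the active-session count every 60 seconds, 10 samples
        typeperf "\Terminal Services\Active Sessions" -si 60 -sc 10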

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this: I have a small CentOS 5 "cluster" (currently 7 machines, but potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script, with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one that I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error.

    As I understand it, I'll need to accomplish the following tasks:

    - add a non-privileged user to the target system for running the daemon without unnecessary root privileges
    - package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake). No source is available.
    - add a firewall hole so that the end-users can communicate with the "cluster" nodes
    - add a cron task which can start the daemon on @reboot

    I'm coming up with plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Does anyone have any good resources they can share for achieving this goal? Thanks!
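
    As a starting point, a heavily trimmed .spec sketch for the binary-repackaging case (all names and paths are placeholders; %setup assumes the installed tree has been tarred up on the build machine):

        Name:           vendorapp
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Repackaged vendor application (binaries only)
        License:        Commercial
        Source0:        vendorapp-1.0.tar.gz
        # Pre-built binaries: skip debuginfo extraction
        %define debug_package %{nil}

        %description
        Vendor binaries repackaged from an already-installed instance.

        %prep
        %setup -q

        %install
        mkdir -p %{buildroot}/opt/package
        cp -a * %{buildroot}/opt/package/

        %pre
        # Create the unprivileged daemon user if it doesn't exist
        getent passwd vendorapp >/dev/null || \
            useradd -r -s /sbin/nologin -d /opt/package vendorapp

        %files
        %defattr(-,vendorapp,vendorapp,-)
        /opt/package

    The firewall hole and the @reboot cron entry would fit naturally in a %post script, and the finished RPM could then be published to the self-hosted repository with createrepo.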

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables.

    I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary.

    Some examples from a "problem" cluster:

    - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
    - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
    - The real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB RAM allocated, yet it averages under 9GB of use.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? Do customers need to be educated, for example? And what specific metric should be used to meter RAM usage: tracking the peaks of "Active" versus time?
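
    On the metering question, a PowerCLI sketch that compares configured RAM with the "Active" statistic (assuming PowerCLI is installed; the counter name and units below are the usual ones but worth verifying against your vCenter):

        Connect-VIServer vcenter.example.com
        # Configured vs. active guest memory for every VM
        Get-VM | Select-Object Name, MemoryGB,
            @{Name='ActiveGB'; Expression={
                [math]::Round((Get-Stat -Entity $_ -Stat mem.active.average `
                    -Realtime -MaxSamples 1).Value / 1MB, 1)}}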

  • Is there a way to use VirtualBox without using its resource registry?

    - by Catskul
    Summary

    VirtualBox seems to want everything to be "registered", which makes it much more annoying to work with on the command line. I'm attempting to create an automated script which will create, move, start, stop, and destroy virtual machines and virtual disks. Requiring registration complicates the task for the following reasons:

    - it leaves state information around that can cause unpredicted edge cases, causing scripts to fail
    - it creates potential namespace collisions for multiple processes creating VMs with the same name
    - moving/copying resources on the same machine is more complicated, because references in the registry need to be updated
    - copying resources (disk + VM combination) to another machine requires reconfiguration once they reach their target machine, and requires the transfer of extra metadata to do the reconfiguration
    - if something unexpectedly fails, and an unregister thus fails to happen, leftover configuration information can cause problems in subsequent runs

    Use Case

    My specific use case is a continuous integration server which creates and destroys VMs and disk images, potentially with the same name, and would require more logic to deal with the registry's statefulness.

    Imaginary Example

    It seems that I should just be able to, for example (using some imaginary and/or incorrect commands):

        mkdir foobar
        customdiskimg_script ./foo/foo.vdi
        vboxmanage createvm --name "foo" --ostype Linux --basefolder ./foo/foo.xml
        vboxmanage storagectl ./foo/foo.xml --name foo --add ide
        vboxmanage storageattach --storagectl foo --medium ./foo/foo.vdi ./foo/foo.xml
        vboxmanage startvm ./foo/foo.xml

    TLDR

    Is there a way to use VirtualBox without "registering" hard disks and VMs?

  • Reverse web proxy with time constraints

    - by user2893458
    I have a web application which produces several unique URLs of the type http://service.company.com/service.html?type=aaaa&key=jfiZm6u6cW where the last part is a randomly generated key. Each such URL provides access to an instance of the service provided. I am looking for a way to restrict access to those URLs based on time constraints. As an example, URL #1 should be available between 8:00AM and 10:00AM on May 30, URL #2 between 10:30AM and 12:00PM on May 31, and so on.

    I already have a resource scheduling application based on Drupal and would like to find a way to include those URLs as scheduled resources. The web application is deployed on Apache Tomcat, and I don't have the knowledge or the resources to alter it; I therefore thought that I could put some sort of reverse proxy in front of the web app to implement the time-constraint feature. In my thinking, the reverse proxy would allow or deny access to each URL based on rules that my scheduling application would provide.

    There may be other ways to deliver such a solution, but I can't think of anything better, so the question is: is there a reverse web proxy architecture that could allow access to the destination URLs based on time and date rules? Any other ideas are more than welcome.
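
    If the proxy ends up being Apache with mod_rewrite, time windows can be expressed with the %{TIME} variable (YYYYMMDDHHMMSS, compared lexicographically). A sketch for one key; the date values are placeholders, and in practice the scheduling application would have to generate these rules or a RewriteMap:

        RewriteEngine On
        # Deny access to this key outside 08:00-10:00 on 30 May 2013
        RewriteCond %{QUERY_STRING} key=jfiZm6u6cW
        RewriteCond %{TIME} <20130530080000 [OR]
        RewriteCond %{TIME} >20130530100000
        RewriteRule ^/service\.html$ - [F]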

  • Which components should I invest in for a backup machine?

    - by Senthil
    I am a freelance developer. I have a PC, a laptop, and an old testing and file server machine. I might add one or two more in future. I want to have an on-site backup machine that can handle backups of ALL these machines: file backups, MySQL backups, backup of a Subversion repository, etc.

    When building the machine, which components should I invest more in? For example, the cabinet should have lots of room for expansion, and hard disk size should be large; but I guess hard disk speed need not be high (?). What about other components like RAM, PSU, processor, network card, and cooling? How much relative importance do these have in a backup machine? Which of these components should be high-end or large, and which ones need not be?

    Some idea of the load: there will be TBs of data. File backups and Subversion repository backups will be done at least daily, and MySQL backups weekly. Assume 3 machines at the moment and somewhere around 10 machines in the future.
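
    For a sense of the workload, such a box mostly runs scheduled jobs like this hypothetical crontab sketch (paths, hosts, and the backup user are placeholders):

        # /etc/cron.d/backups -- nightly files and repos, weekly MySQL
        0 1 * * * backup rsync -a --delete pc:/projects/ /backup/pc/projects/
        0 2 * * * backup svnadmin hotcopy /srv/svn/repo /backup/svn/repo-latest
        0 3 * * 0 backup mysqldump -h server --all-databases | gzip > /backup/mysql/all.sql.gz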

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

        [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
        [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
        [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to have it shut up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

        # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

        $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
        pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
        $ cat /etc/modprobe.d/blacklist
        blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    Edit: The module was renamed recently; now the following works from userland:

        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
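
    One blunt way to make the unbind persist, as a sketch: re-issue it late in boot from /etc/rc.local. The kernel still binds the device briefly during boot, but it stays unbound afterwards (device and driver names taken from above):

        #!/bin/sh -e
        # /etc/rc.local -- runs at the end of multi-user boot
        echo 0000:00:1a.0 > /sys/bus/pci/drivers/ehci-pci/unbind
        exit 0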

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September with Windows 7 64-bit Professional on it. The audio device works properly when listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. But when I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice, and it's impossible to understand anything. Everything worked fine until I installed Ventrilo, but removing it didn't solve my problem.

    Update: Here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop has the same problem too (I tried TeamSpeak 3). Here are some details on my laptop and desktop:

        Laptop: Dell Studio 1555, Core 2 Duo P8600 2.4GHz, 4GB RAM dual channel, ATI HD 4570 512MB dedicated (up to 2048MB), IDT High Definition Audio

        Desktop: Asus P5KPL-AM motherboard, Dual Core CPU E5200 2.50GHz, 2x2GB PC6400 dual channel, ATI Radeon HD 4650 512MB, VIA High Definition Audio

    Both computers run Windows 7 Professional 64-bit. So how do I restore my audio?

    SOLVED: The problem was in the router firmware. A bug caused VoIP traffic to be recognized as a DoS attack, and the router garbled every packet. I installed the newest firmware and everything is fine :)

  • How can I get DVDs playing after a Vista to XP change?

    - by Liath
    I replaced my Vista install on a Dell Inspiron 1525 with XP and have managed to get most things up and running again; however, I'm having trouble playing DVDs. When I try to play a DVD I get the following message:

        Windows Media Player cannot play this DVD because there is a problem with digital copy protection between your DVD drive, decoder, and video card. Try installing an updated driver for your video card.

    I have ensured that my drive is configured to play Region 2 discs (I'm in the UK), and I've installed the most up-to-date XP codec pack, which makes me think it's a driver issue. In Device Manager my DVD drivers are up to date; however, under "Other Devices" I'm missing several which sound key:

    - Audio Device on High Definition Audio Bus
    - Modem Device on High Definition Audio Bus
    - Video Controller
    - Video Controller (VGA Compatible)

    However, I've installed all the relevant drivers I can find on the Dell website. The drive itself is working; I've run software from the drive. I'm afraid I am far from a sysadmin, so I'm struggling on this one. How can I get my DVDs playing again?

  • EC2 spot instance for a daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain in simple terms or refer me to a good guide on how to achieve the below.

    I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done.

    Is that the right way to do it? If so, how? How do I tell the spot instance to run every day: through a cron job on the other server, or is there a better way? And how do I set it up to run the script on startup: through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day. Thanks.
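
    A sketch of the moving parts, assuming the aws CLI on the always-on server (the price, instance type, and AMI in launch-spec.json are placeholders; the user data script would be baked into the AMI or passed base64-encoded in the launch specification):

        # Cron entry on the always-on EC2 server: request the worker at 23:00 daily
        0 23 * * * aws ec2 request-spot-instances --spot-price "0.10" \
            --instance-count 1 \
            --launch-specification file://launch-spec.json

        # Startup script inside the AMI: do the work, then power off
        #!/bin/bash
        /opt/report/run_daily_report.sh && shutdown -h now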

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive).

    Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally it should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
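
    A bash sketch of the kind of provisioning script this could become (every name, path, and URL below is a placeholder; error handling and the virtual host, cron, and capistrano steps are elided):

        #!/bin/bash
        # Usage: ./provision_customer.sh <name> <domain>
        set -e
        NAME="$1"; DOMAIN="$2"

        # 1. SSH-capable system user
        useradd -m -s /bin/bash "$NAME"

        # 2. Database and database user
        mysql -e "CREATE DATABASE \`$NAME\`;
                  GRANT ALL ON \`$NAME\`.* TO '$NAME'@'localhost' IDENTIFIED BY 'changeme';"

        # 3. Check out the application
        sudo -u "$NAME" git clone git@git.example.com:app.git "/home/$NAME/app"

        # 4. Schema and init data
        mysql "$NAME" < "/home/$NAME/app/db/schema.sql"

        # 5. Virtual host, cron jobs, first login user, capistrano entry...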

  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu. I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify on a computer running Windows 7. I can access it from Windows 7, but not from Ubuntu 11.10. Every time I try, I get a "disconnected" message. I'm using an MSI FX400 notebook with an Intel Centrino Wireless-N 1000 wireless card. The Ubuntu version is 11.10 with the KDE desktop.

        $ sudo lshw -c network
        [sudo] password for ht3t:
          *-network
               description: Wireless interface
               product: Centrino Wireless-N 1000
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:06:00.0
               logical name: wlan0
               version: 00
               serial: 00:26:c7:56:b8:f0
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
               resources: irq:44 memory:e7400000-e7401fff
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:07:00.0
               logical name: eth0
               version: 06
               serial: 40:61:86:b6:b1:a2
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff

    I can't do anything without an internet connection. How can I fix this?

  • In Windows 7, is there a way to log in from any user account, see the same workspace, and use the running programs of another user?

    - by WickedMongoose
    Our group has a number of test stands with PCs that are currently accessed via a single group login. It has been decreed from on high that this is not the way to do things, for security reasons, and we all agree. However, multiple team members from around the world log into these test stands and need to be able to access programs that were started from what would be different user profiles if we no longer had a single common login.

    Is there a way to have a common workspace, such that when different users log in, they can see and interact with all running applications as if they were using a common login? The applications we run link to and monopolize hardware resources connected to the PC, and it is time-consuming to restart them and reload settings every time a new user logs in. Even if a program did not monopolize the hardware, many of these programs are resource-intensive and require a large portion of each machine's RAM, so running a second copy of an application that is already running under another user account would quickly consume all system resources.

    A simple example: I open a Chrome browser while logged into our PC. I then log out, and another team member remotes in; he should be able to see my open browser and interact with it as if he were the one who opened it.

    Any alternative process flows or solutions from someone who has gone through a similar transition would be appreciated. This is not a request for how to give all users the ability to run a program; it is a request for how to allow all users to interact with running applications that were started by other users, as if the new user had started and has control of the application.

  • Building a small server farm

    - by RayQuang
    Hi, I am planning to set up a tech startup company that will provide web application solutions. Eventually we hope to diversify into different areas, such as social media or other services. For now we plan on running a high-demand website (from 1,000 to 10,000 users in the first year) running the application. This includes a MySQL database backend, email, and development servers.

    My question is: what type of server arrangement will work best? That is to say, should I have a small cluster of ultra-high-power machines (e.g. top-of-the-range Xeons with 12GB RAM), or would it be better to have more, less powerful servers behind a load balancer? Should I go for 1-2U rack-mounted servers, or would tower servers be better for maintainability?

    Finally, I would also like to know what kind of internet connection and router I would need. I currently have 10Mbit down and barely 1Mbit up, but soon our area will have a fiber optic connection with speeds of up to 25Mbit/s.

    Thanks in advance, RayQuang

    UPDATE: sorry, I forgot to mention it: the platform I will be using is PHP with the APC code cache, probably running on Debian.
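
    If the load-balanced route wins out, the front end can be as small as this nginx sketch (the backend addresses are placeholders; the PHP/APC stack would run identically on each node):

        upstream app_servers {
            server 10.0.0.11;
            server 10.0.0.12;
        }
        server {
            listen 80;
            # Distribute requests round-robin across the PHP nodes
            location / {
                proxy_pass http://app_servers;
            }
        }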
