Search Results

Search found 5545 results on 222 pages for 'future'.


  • Installing Windows 7 over PXE, preferably with domain autojoin

    - by Ivan Vucica
    At an educational non-profit, I've inherited a previously set-up Windows domain that, after the first reinstall of the machines, we ended up not using by simply not joining machines back into the domain. Over last summer, before the annual reinstall for shipping machines to the summer school, I toyed with the idea of installing Windows 7 over the network instead of just imaging the machines. It took a bit longer than I expected to figure out the basics; honestly, I expected Windows would be friendlier to PXE installation out of the box.

    What I'm interested in is best practices for installing Windows 7 over PXE with domain autojoin. I'd love it if the whole setup could optionally be hosted on a UNIX-based system as well.

    I've had some success by preparing an ISO using the Windows Deployment Kit and loading the ISO into memory. This was needed since I wanted a menu, and I think I couldn't get PXELINUX to chainload into Windows' bootloader. Unfortunately, I couldn't figure out much about customization of the Windows setup in that timeframe, nor could I get Samba to work properly; studying it all ended up being too lengthy, especially the portion where I edited a disk image on Windows and copied it out. The WDK didn't make things easier by mounting the disk image into RAM and writing it back in its entirety when done, making me a very sad boy. I've recently found a different approach that appears to be closer to Microsoft's original idea for netboot deployment and does not involve ISOs.

    So my question boils down to the following. What exact approach do you use for netbooting the Windows 7 setup? How can Windows 7 setup best be customized to be completely unattended, including installation on a specific system partition without destroying the data partition, creation of a passworded admin and a default user, choice of a MAC-address-based hostname, and joining a domain? As much detail as possible for everyone's future reference would be appreciated. WDS isn't a bad choice, but if a Linux-based install can be used, that'd be better.
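
    For the domain-join part specifically, the usual mechanism is an answer file. Below is a minimal sketch of the relevant unattend.xml fragment; the domain name, credentials, and architecture are placeholders, so check them against the WAIK documentation for your exact image:

        <!-- hedged sketch: unattended domain join fragment for unattend.xml -->
        <!-- belongs in the "specialize" pass; domain, user, and password are placeholders -->
        <!-- processorArchitecture is x86 or amd64, matching the image -->
        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="x86" publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <Credentials>
              <Domain>EXAMPLE</Domain>
              <Username>joinaccount</Username>
              <Password>joinpassword</Password>
            </Credentials>
            <JoinDomain>EXAMPLE</JoinDomain>
          </Identification>
        </component>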

    Read the article

  • Knowledge and user generated content management system to track files, research, proposals, etc.?

    - by Eshwar
    I'll try to keep it short. Here's the scenario: we have employees all over the world performing similar work, i.e. research, generating PowerPoint slides, Word documents, graphics, etc. A lot of this previous work can often be reused for a future project. The current arrangement is email and phone calls, which, as you would agree, is quick if you know where to look, but otherwise archaic and very, very inefficient. So I am looking for software that will allow me to do the following:

        - Tag files, e.g. an investor presentation on cellphone usage in Kenya would be tagged investor, cellphone, kenya
        - Manage references, e.g. if we read something on the internet, we should be able to paste that link in some fashion and tag it as above
        - Preferably cloud based, so that it can be accessed by anybody; additionally it would be nice (though NOT a must) to have access levels (director, manager, everyone)
        - A nice interface that non-technically-savvy folks can warm up to ;)
        - A desktop app would be handy, so that people don't always have to click upload or something

    A tree-based system is inefficient in this case, because content is usually linked across branches and also people might not quite agree on one format for the tree. Tagging works around this very nicely. What I have considered so far:

        - Evernote (for its more professional look)
        - Springpad (for its versatility with content)
        - Mendeley (this is a research manager and in some ways ideal, but I fear it's limited to PDFs)

    The goal is that when somebody wants to look for a document, they don't have to ask a colleague; they can just search with keywords and all relevant information shows up. Thanks!

    Read the article

  • How to disable SSH local port forwarding?

    - by SCO
    I have a server running Ubuntu and the OpenSSH daemon; let's call it S1. I use this server from client machines (let's call one of them C1) to build an SSH reverse tunnel using remote port forwarding, e.g.:

        ssh -R 1234:localhost:23 login@S1

    On S1, I use the default sshd_config file. From what I can see, anyone with the right credentials {login,pwd} on S1 can log into S1 and do either remote or local port forwarding. Such credentials could be a certificate in the future, so in my understanding anyone grabbing the certificate could log into S1 from anywhere else (not necessarily C1) and hence create local port forwardings. To me, allowing local port forwarding is too dangerous, since it effectively creates a kind of public proxy. I'm looking for a way to disable only -L forwardings. I tried the following, but it disables both local and remote forwarding:

        AllowTcpForwarding No

    I also tried the following, which only allows -L to SX:1. It's better than nothing, but still not what I need, which is a "none" option:

        PermitOpen SX:1

    So I'm wondering if there is a way to forbid all local port forwards, writing something like:

        PermitOpen none:none

    Is the following a nice idea?

        PermitOpen localhost:1
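
    A note for future readers: newer OpenSSH releases grew options that address exactly this, so something like the following sshd_config sketch may do the job (verify both directives against your sshd_config(5) man page, as older releases lack them):

        # newer OpenSSH can restrict the forwarding direction outright:
        # permit -R tunnels, refuse -L
        AllowTcpForwarding remote

        # alternatively, refuse every -L target explicitly
        PermitOpen none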

    Read the article

  • xauth, ssh and missing home directory

    - by flolo
    We have several servers, and normally everything works fine, except now: we're getting new air conditioning installed. This takes 36 hours, and for this time almost all servers are shut down; only 2 servers remain up for the most important tasks (i.e. accepting incoming email, delivering some important websites, login server). Everybody was informed that if they need data from their home directories, they should fetch it before the shutdown.

    Long story short: someone realized that he has to run a certain program on one of the servers. No problem: he can log into our login server remotely and run the program there without a home directory (the binaries are local, and the necessary information can be copied to /tmp). That works like a charm until...

    ...the user needs to run a GUI program. I can find no easy way to make it run. Usually ssh -Y honk@loginserver is enough, but now the home directory is missing and ssh is not able to copy the cookies into ~/.Xauthority (as the file server with the home directories is down). Paranoid as all system admins are, all X servers listen only locally, not on TCP ports, so no remote X connection is possible. The SSH config is waterproof, i.e. there is no way to set environment variables.

    My problem is that the proxy MIT cookie generated by ssh gets lost because .Xauthority doesn't exist. If I could retrieve it somehow, I could re-enter it into a .Xauthority in /tmp. The only other option (besides changing the config) that came to my mind is making a tunnel (netcat, or better ssh) from the remote host to the login server and copying the cookie manually (not sure if the TCP-to-Unix-domain-socket stuff works as expected). Any good suggestions (for the future; our servers are already back up)?
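
    One low-tech workaround, assuming someone has root on the login server: give the stranded account a temporary home directory, so sshd's xauth call has somewhere to write the cookie. A sketch (user name and path are made up, and usermod wants the user logged out while you run it):

        # as root on the login server: temporary home for user 'honk'
        mkdir -p /tmp/home-honk
        chown honk /tmp/home-honk
        usermod -d /tmp/home-honk honk
        # 'ssh -Y honk@loginserver' can now store its cookie in
        # /tmp/home-honk/.Xauthority; revert with another usermod -d
        # once the file server is back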

    Read the article

  • Enlarging everything on 16" 1920x1080 notebook display in Windows 7

    - by Rob
    Does Windows 7 have an option to enlarge everything on the screen, e.g. via a DPI setting? How well does this work? That is, do objects look clear when enlarged? I ask because software that enlarges non-photographic bitmap images, e.g. icons and symbols, can sometimes leave them artifacted, with harsh jagged slopes and blurred lines. I've tried the DPI setting in Windows XP, but it doesn't enlarge everything, and some things are not as clear, as described above.

    I'm looking at a notebook/laptop with this spec, and I've already enjoyed using a 15.4" 1920x1200 display for 5 years. I am buying the laptop for my father, who will probably prefer larger objects on the screen, although I want to provide some future-proofing by allowing more on the screen if needed. I'm not interested in answers that debate the effectiveness or other merits of 1920x1080 on a 16" display, please. The alternative option of 1366x768 seems too little.

    Read the article

  • Quickly set up a Windows Server and automatically install and configure software

    - by Chris
    Yesterday I spent far too much time downloading and installing software on Windows Server 2008. I only had to set up a simple server with SQL Server 2008 Express using Microsoft's Web Platform Installer, then configure it to enable remote connections. Everything had to be attended, wasting my time. On a Linux system this would be trivial to automate, but this is Windows. I do this very rarely, but in the future I would like it to take as little time as possible. I could make a disc image of everything I installed and configured, but is there a better way? I know nothing of advanced deployment techniques on Windows. Ideally I would like to be able to remotely re-install the OS, or have an unattended install (which I know is possible). Any tips to make the software I need easier to install and configure, with minimal interaction necessary, would be helpful. I don't expect everything I asked for to be possible and easy to do. Basically, if any part of it can be done quicker, or at least without user input, that's what I'm looking for.
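
    For the SQL Server part at least, the installer can run unattended from the command line instead of through the Web Platform Installer. A sketch, assuming SQL Server 2008 Express setup media; the parameter values are placeholders and should be checked against the setup documentation for your exact edition:

        REM hedged sketch: silent SQL Server 2008 Express install
        REM with TCP enabled for remote connections
        setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE ^
          /INSTANCENAME=SQLEXPRESS /TCPENABLED=1 ^
          /SQLSVCACCOUNT="NT AUTHORITY\NETWORK SERVICE" ^
          /SQLSYSADMINACCOUNTS="BUILTIN\Administrators"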

    Read the article

  • Windows 7 & Photoshop CS5.1 - "Fonts missing" issue - I have the font!! (sort of)

    - by Tigue Von Bond
    I've run into a really aggravating issue with Adobe Photoshop CS5.1 on at least two occasions. I downloaded a layered PSD file to work with; the release notes directed me to a download page for the font used, which was Futura Medium Condensed. I checked and did not have any Futura fonts at all, so I downloaded and installed the font from the source the provider of the PSD pointed to. I closed and reopened Photoshop, and when I open the PSD file I get an error saying:

        Some text layers contain fonts that are missing. These layers will need to
        have the missing fonts replaced before they can be used for vector based output.

    I then go to edit the text layer and receive:

        The following fonts are missing for text layer "discount":
        Future CondensedExtraBold
        Font substitution will occur. Continue?

    If I click OK, it substitutes Myriad Pro for this layer. Didn't I download the right font? I go into the font dropdown and see I have a font with a slightly different name: "Futura-CondensedExtraBold-Th Regular".

    I have also seen this issue with Helvetica. I received a PSD file, got the same "some text layers contain fonts that are missing" dialog when opening the file, and when I go to edit a layer with text I get:

        The following fonts are missing for text layer "Home":
        Helvetica
        Font substitution will occur. Continue?

    I click Continue, it substitutes Myriad Pro, and I check my font list: sure enough, I have a bunch of Helvetica fonts, none exactly named "Helvetica".

    Is this a common issue? Googling it yielded a few people with similar problems (I think all on Macs) but either no concrete help or no response. Is it that the two font names aren't EXACT matches? If so, is there any way to set up Photoshop to substitute more intelligently, or even to set up some sort of mapping (if "Helvetica", then substitute "Helvetica Lt Std")? Is there anything else, maybe something I am not thinking of?

    Read the article

  • What are the "least legally restrictive" well-connected countries to host a website?

    - by monster
    NB: I am aware that this question is subjective, as it can't be defined precisely, but the answers should still be "objective": a country name, and what makes it legally safer. EDIT: A) I am located in Germany. B) I am NOT looking for a place to offer pirated software/media; there will be no binaries on my site, except profile icons.

    Hello! I want to start publishing "social" websites/apps, and I found that the biggest initial problem is this: any and all services I have to depend on, including the domain registrar, DNS provider, server/cloud provider, CDN provider, even my insurance agent, basically say they can "throw me out" if my website contains "unacceptable" content. It's always phrased in such a way that basically anything can fall under "unacceptable" content. This is very frustrating, because you just can't fully control what users post on a "social website", and so you basically have to expect, when you go to bed, that your site will be gone when you wake up. I've heard a lot of horror stories about this.

    Since the "Terms of Service" of all those providers exist foremost to protect them from legal action, and those legal actions depend on the country where they are located, it seems the first step is to find which country is the "safest" place to locate a site. "Safest" being defined as: where I am least likely to get in legal trouble with the local authorities if some user posts something unacceptable in some way. The main restriction is that it should also be a "well-connected" country, because there is no point in being "safe" if my users can't reach my sites or the latency is unacceptable. I am targeting English-speaking people in any country as my future users.

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider in one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS, so I should turn to dedicated hosting.

    The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx). But this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor option. However, that starts at $599 monthly. Now, I won't be able to transfer all 30 of our sites to this machine; I'd only be able to transfer, say, 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year.

    Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month's costs:   $599
        First month's revenue: 6 x $16.95 = $101.70
        Loss in first month:   $497.30

        First year costs:   $599 x 12 = $7188
        First year revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1728.90
        Loss in first year: $5459.10

    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

        NameVirtualHost 127.0.0.1

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/
            ServerName localhost
            ServerAdmin admin@localhost
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/sub/
            ServerName sub.project.com
            ServerAdmin [email protected]
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            DocumentRoot C:/xampp/htdocs/project/
            ServerName project.com
            ServerAdmin [email protected]
        </VirtualHost>

    And this in my hosts file:

        # development
        127.0.0.1 localhost
        127.0.0.1 project.org
        127.0.0.1 sub.project.org

    When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages), I get this error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was
        unable to complete your request.

    The error log shows this:

        [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the
        limit of 10 internal redirects due to probable configuration error. Use
        'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel
        debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
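
    That "exceeded the limit of 10 internal redirects" error is the classic signature of an .htaccess rewrite rule looping on itself. If the project uses a front-controller rewrite, a sketch like the following (assumed file layout; adjust RewriteBase to match the vhost) shows the two conditions that normally break the loop:

        # hedged sketch: front-controller rewrite that avoids redirect loops
        RewriteEngine On
        RewriteBase /
        # pass through anything that is a real file or directory
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # everything else goes to the front controller
        RewriteRule ^ index.php [L]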

    Read the article

  • How do I log back into a Windows Server 2003 guest OS after Hyper-V integration services installs and breaks my domain logins?

    - by Warren P
    After installing Hyper-V integration services, I appear to have a problem logging in to my Windows Server 2003 virtual machine. Incorrect passwords and logins give the usual error message, but a correct login/password gives me this message:

        Windows cannot connect to the domain, either because the domain controller is
        down or otherwise unavailable, or because your computer account was not found.
        Please try again later. If this message continues to appear, contact your
        system administrator for assistance.

    Nothing pleases me more than Microsoft telling me (the ersatz system administrator) to contact my system administrator for help, when I suspect that I'm hooped. The virtual machine has a valid network connection, but has decided to invalidate all my previous logins on this account, so I can't log in and remotely fix anything, and I can't remotely connect to it from outside either. This appears to be a catch-22. Unfortunately I don't know any non-domain local logins for this virtual machine, so I suspect I am basically hooped, or that I need Ophcrack. Is there any alternative to Ophcrack?

    Second and related question: I used Disk2VHD to do the conversion, and I could log in fine several times, until the Hyper-V integration services were installed; then suddenly this happens and I can't log in. Was there something I did wrong? I can't get networking working inside the VM BEFORE I install integration services, and at the very moment integration services is installed, I get locked out like this. I probably should always know the local login of any machine I'm upgrading so I don't get stuck like this in the future... great. Now I am reminded again of this.
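
    For future readers: that error usually means the VM's machine-account secure channel has fallen out of sync with the domain controller. Disconnecting the virtual network adapter before logging in sometimes lets cached domain credentials through, which may be enough to get a session; from there, one common recovery is to reset the secure channel with netdom. Server and account names below are placeholders:

        REM hedged sketch: reset the computer account's secure channel
        REM (netdom ships with the Windows support tools)
        netdom resetpwd /s:DC-NAME /ud:MYDOMAIN\adminuser /pd:*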

    Read the article

  • Ubuntu 9.04 PPTP broken after a power failure

    - by kevin42
    I have a small Ubuntu 9.04 router set up as a NAT box and a PPTP server. After a power failure, everything except the PPTP server still works. A Windows client gets to "registering your computer on the network" but then says:

        Error 742: The remote computer does not support the required data encryption type.

    I did some research and I think the problem is with the ppp_mppe module. When I try to run modprobe ppp_mppe, it hangs indefinitely. What would cause this hang? Any ideas how I can troubleshoot this further? Thanks for the help!

    UPDATE: I am still having the problem; however, I have found some more information. When the first user tries to connect to PPTP, the process list shows modprobe sha1 running, and one instance of modprobe ppp_mppe for each connection attempt. If I killall modprobe at this point, the next connection attempt works, and everything is fine until the next reboot. I'm planning to do a clean install at some point in the future, but I'd really like to get to the real cause of this.
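
    Since this started with a power failure, corrupted module metadata is a plausible suspect. A few hedged diagnostics that may narrow it down:

        # rebuild the module dependency map in case it was damaged
        sudo depmod -a
        # confirm the module files themselves are intact and readable
        modinfo ppp_mppe sha1
        # load verbosely to see which step the hang occurs on
        sudo modprobe -v ppp_mppe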

    Read the article

  • Organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... in short, a big mess with no tags! Probably there's no single piece of software capable of organizing everything, though I'd love one. Hopefully some time in the near future we will all be able to just sync the cloud of our automagically selected music to a newly created offline copy. But meanwhile...

    Please do consider that I've at least given a shot (even if not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of the iPhone and smart playlists. I'm not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.

    Read the article

  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a Linux CentOS box. As soon as a USB stick or external hard disk is connected, this program will shoot an email to a specific person with the details of the device (size, mount point, time). When the device is disconnected, it will again shoot an email to that person with the same kind of information. Meanwhile, the program will also write logs to syslog/messages under the program's name for easy tracking.

    Now I want to ask: what is the best way to develop this program? As I'm new to this field, I know nothing about it. Should I use Perl, Bash scripting, or some other language? I have no idea what the right approach is, because this program will keep running all the time to keep a check on the USB ports. I know a few relevant commands, like lsusb and fdisk (to check attached USB devices) and df -h (to get details of a device), but I don't know how to achieve what I'm thinking of using these commands. One more thing: in the future I will also need to port this program to Ubuntu and Citrix XenServer, and it should work the same everywhere.
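
    Rather than polling in a loop, the kernel can tell you about plug events: udev runs a program of your choice on add/remove. A sketch, assuming a CentOS-style udev; the rule file name, script path, and email address are all made up:

        # /etc/udev/rules.d/90-usb-mail.rules (hypothetical)
        ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh add %k"
        ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-notify.sh remove %k"

        #!/bin/bash
        # /usr/local/bin/usb-notify.sh (hypothetical): log and mail the event
        EVENT="$1"; DEV="$2"
        logger -t usb-monitor "USB disk $EVENT: /dev/$DEV"
        { echo "Event:  $EVENT"
          echo "Device: /dev/$DEV"
          echo "Time:   $(date)"
          df -h "/dev/$DEV" 2>/dev/null   # size/mount details, if mounted
        } | mail -s "USB $EVENT on $(hostname)" [email protected]

    Because udev exists on CentOS, Ubuntu, and XenServer's dom0 alike, the same rule and script should port with little change.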

    Read the article

  • How do I run multiple MVC apps within a subdomain on IIS7?

    - by Matthew Patrick Cashatt
    Hello, and thanks for looking.

    Background: I am currently wrapping up a development contract, and the client would like me to push a build of the application to their IIS 7-based server, on which they would like to run multiple MVC apps. One of the issues I have off the bat is that this server is already a subdomain on their larger network. So, if I enter SERVERNAME in my browser, it automatically directs to SERVERNAME.COMPANYNAME.COM. Now, this is just fine if I place my application in the default website/root. In this scenario, clicking a link that requests admin.html directs to SERVERNAME.COMPANYNAME.COM/admin.html as usual. BUT they want me to place the app in a subdomain on this server so that they can also run other apps on the same server. So I assume I need MYAPP.SERVERNAME.COMPANYNAME.COM, but I have no idea how to do that. Complicating matters is that my app, and the future ones they wish to install, are all MVC based, which intercepts and rewrites URLs. I assume this takes care of itself if I can just successfully get my app into a subdomain to begin with.

    What I have tried:

        - Creating a new site on the server in its own app pool
        - Setting the binding for that site to MYAPP.SERVERNAME.COMPANYNAME.COM
        - Setting the binding for that site to MYAPP
        - Setting the binding for that site to MYAPP.SERVERNAME
        - Setting the binding for that site to MYAPP.SERVERNAME.COM
        - Setting the binding for that site to MYAPP.COMPANYNAME.COM

    Nothing is working. Am I missing something simple here? Thanks, Matt
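
    A hedged note: a host-header binding only works once the name actually resolves, so MYAPP.SERVERNAME.COMPANYNAME.COM needs a DNS A/CNAME record (or a client hosts-file entry for testing) pointing at the server before IIS ever sees the request. With DNS in place, the binding itself can be scripted; the names and paths below are placeholders:

        REM hedged sketch: create the site with a host-header binding
        %windir%\system32\inetsrv\appcmd add site /name:"MyApp" ^
          /bindings:http/*:80:myapp.servername.companyname.com ^
          /physicalPath:"C:\inetpub\myapp"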

    Read the article

  • Best practices for setting up a new Windows 2008 R2 server with EC2 AWS

    - by Alex
    Can someone comment on what they would add to the following list of SOPs in terms of best practices? This is being set up on AWS, and then, after further testing, back in our datacenter.

    Standard Operating Procedure (SOP), Installation Part 2: Installation of Software Components in Windows 2008 R2 (updated).

        Step 1: Log on to the host through Remote Desktop.
        Step 2: Open Server Manager - Server Roles - install Web Server (IIS 7.5) with IIS 6 compatibility features and management compatibility mode.
        Step 3: Open IE/Mozilla to download the software listed below, and save all installation files to a folder called "AWS Server Install Files" for future reference:
                - .NET Framework 2.0 (download from the internet)
                - Crystal Reports for .NET Framework 2.0 (x64) (download from the internet)
                - SQL Server 2005 (AWS image)
        Step 4: Once all software is saved on the local drive, install it one by one.
        Step 5: Navigate to the Desktop folder to install the software listed below:
                - Microsoft ASP.NET 2.0 AJAX Extensions 1.0 (placed in Desktop\Softwares)
                - WebEx recorder (placed in Desktop\Softwares)
                - WinRAR (placed in Desktop\Softwares)
        Step 6: Make sure all the software is working fine.
        Step 7: Inspect the server once, entirely.
        Step 8: Log off and stop the instance.
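
    One habit worth adding to an SOP like this: script the role installs instead of clicking through Server Manager, so the procedure is repeatable on the next instance. A sketch for Step 2 on 2008 R2 (feature names should be double-checked with Get-WindowsFeature):

        # hedged sketch: IIS with IIS 6 management compatibility, from PowerShell
        Import-Module ServerManager
        Add-WindowsFeature Web-Server, Web-Mgmt-Compat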

    Read the article

  • Home Server: storage virtualisation, what to choose?

    - by Huygens
    I'm looking for virtualisation solutions for storage and OS for a home server: a sort of private cloud where I manage the storage space independently of the VM one. This question focuses on storage management. (I have another question related to VM/compute instance management.)

    Here are my environment and wishes:

        - Server: HP ProLiant MicroServer with 8 GB RAM (AMD Turion dual core with AMD-V technology), with one 250 GB system disk and up to 4 HDDs (2 TB) for data
        - OS types: only Linux (perhaps a *BSD VM in the future). Linux distributions do not matter; I'm familiar with RHEL, Fedora, SUSE, and Ubuntu, but any other recommendation will be fine.
        - The 4 HDDs are going to be a software RAID array, probably RAID 5.
        - Storage should be "virtualised/cloudified" and easy to extend: if I add a NAS to the network, I can include the NAS capacity within this storage space as one virtual disk. This could be a NAS, an external HDD, or another server.
        - Cluster FS, S3-style space, or OpenStack block storage? Whatever is easier to manage/maintain and easy to integrate/plug into a VM/compute instance.
        - I would prefer free (libre, as in free speech) and open source tools, but it does not have to be free as in free beer.

    Note: the VMs I intend to run on top of this server are one dedicated to backup, one for an "ownCloud/Dropbox"-like service, and perhaps one for a media server (hosting video and photos). I'm not sure if traditional VMs or compute instances are the most suitable for this.
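
    Whichever storage layer ends up on top, the RAID 5 base itself is nearly a one-liner with mdadm. A sketch, assuming the four data disks show up as /dev/sdb through /dev/sde:

        # hedged sketch: software RAID 5 across the four 2 TB disks
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
        # put LVM on top so the space stays easy to grow later
        sudo pvcreate /dev/md0
        sudo vgcreate storage /dev/md0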

    Read the article

  • ffmpeg cutting video duration

    - by Steve Spence
    When using ffmpeg on Linux, my 4.3 GB video with a duration of 2:21 is being chopped down to a duration of 1:56. I'm trying to reduce the file size, but not lose frames.

        steve@steve-OptiPlex-170L:~/Desktop$ ffmpeg -i microbe.avi microbe.mp4
        ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 12 2012 16:37:58 with gcc 4.6.3
        *** THIS PROGRAM IS DEPRECATED ***
        This program is only provided for compatibility and will be removed in a future release.
        Please use avconv instead.
        Input #0, avi, from 'microbe.avi':
          Duration: 00:02:21.80, start: 0.000000, bitrate: 242311 kb/s
            Stream #0.0: Video: rawvideo, bgr24, 1280x960, 10 tbr, 10 tbn, 10 tbc
        Incompatible pixel format 'bgr24' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x9f861e0] w:1280 h:960 pixfmt:bgr24
        [avsink @ 0x9f86440] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x9f7d800] w:1280 h:960 fmt:bgr24 -> w:1280 h:960 fmt:yuv420p flags:0x4
        Output #0, mp4, to 'microbe.mp4':
          Metadata:
            encoder         : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 10 tbn, 10 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press ctrl-c to stop encoding
        frame= 1164 fps=  6 q=31.0 Lsize=    3775kB time=116.40 bitrate= 265.7kbits/s
        video:3765kB audio:0kB global headers:0kB muxing overhead 0.272870%
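
    For what it's worth, the output above carries a hint: 1164 frames at 10 fps is exactly the 116.4 s reported, against the roughly 1418 frames a 2:21.8 source at 10 fps should contain, so frames appear to be dropped during encoding rather than the file being truncated. A hedged variant that pins the output frame rate and lifts the very low 200 kb/s default:

        # hedged sketch: keep the 10 fps source rate, raise the quality ceiling
        ffmpeg -i microbe.avi -r 10 -qscale 4 microbe.mp4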

    Read the article

  • What methods are available for updating a non-Internet-connected VMWare ESXi host?

    - by romandas
    I have a stand-alone installation of VMware vSphere Essentials, with a vCenter Server and 3 ESXi 4.0 host servers. The environment is intended to remain a stand-alone network, with the exception that I can "float" a workstation or server between the 'net and the VMware network for patches and maintenance. With other installations, where the Internet is available, I've used the vSphere Host Update utility to connect to VMware and then apply the patches to the ESXi hosts.

    My problem is that this utility does not seem to function if it cannot connect to both VMware and the ESXi host at the same time, as the scan-for-patches function will not scan the server without connecting to VMware's site to sync its repository first. Even if I sync it, disconnect from the 'net, and connect to the VMware network, it still won't scan hosts for required patches: it prompts for syncing with VMware, and if you click No to syncing, the scan does not occur.

    Does anyone know of other options for updating the ESXi hosts in some automated fashion? I believe I can manually pull down required patches and apply them, but this will not scale well, and in the future I'm sure I'll want something a bit more scalable.
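
    One avenue that may fit: ESXi 4.0 patches are also published as offline bundles (zip files) that can be applied from the vSphere CLI with no Internet on the management side. A sketch with placeholder host and bundle names, worth checking against the vCLI documentation; the host should be in maintenance mode first:

        # hedged sketch: apply a downloaded offline bundle
        # (vihostupdate ships with the vSphere CLI)
        vihostupdate.pl --server esxi-host-1 --username root \
          --install --bundle ESXi400-201012001.zip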

    Read the article

  • Games, Windows 8.1 and 144Hz display

    - by Marioysikax
    So I have been having problems with a few games after switching from Windows 7 to 8.1, which seem to be related to my 144 Hz monitor. A few examples: Shank, Shank 2, Blood of the Werewolf, Astebreed, The Sims 2, and Rayman Legends patch 1.2. I've had a few others as well, but it's been a while and I have a 600+ game Steam library. Of those games, at least The Sims 2 and Shank worked without any problems with the same setup on Windows 7.

    So basically, these games simply refuse to launch with my basic setup. However, if I plug in a 60 Hz TV over HDMI instead, everything magically starts to work. For Astebreed and The Sims 2, using windowed mode also seems to work. As for Rayman, the demo and version 1.0 work for some reason, while 1.2 breaks the settings menu.

    I have already tried contacting the games' support. EA support stated the game simply shouldn't work with 8.1 at all (which is a lie, as it works with that TV, and a friend on Windows 8 plays just fine), Ubisoft support took a few weeks and then said they would forward the info for further processing, and Blood of the Werewolf support had no idea what's going on and told me to just use my TV instead. Changing the monitor's refresh rate to 120 Hz or forcing it to 60 Hz doesn't do anything. I have DVI right now, but I will try DisplayPort when I get the cable.

    At PCGW, Garrett said it may have something to do with how listing resolutions works in Windows 8 compared to earlier Windows versions, but my googling doesn't bring anything up, and compatibility mode for earlier Windows versions doesn't work either (not that I expected it to).

    My system specs are on my Steam profile. How do I get these games, as well as possible future games with the same problem, to work with my 144 Hz monitor? Downgrading to 7 would work, but is far from practical, and I don't own a legit license for it.

    Read the article

  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT, but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros. So I'm waiting for mark 2, with some extra flash and better reliability, which I assume will come out this year.

    What would suit me better, though, is a 32 GB + 500 GB hybrid drive where I have more control over what is on the flash drive and what is on the disk drive: two physical partitions within the one 2.5" hard drive enclosure, using different media internally (32 GB for core files and 500 GB for data and multimedia). The partitions would be locked so they can't be changed. Or, even better, the disk driver would just make them appear to the OS as two disks that share the same bus... Perhaps it's OK if the BIOS only sees the first drive until the OS is loaded. Is either technically possible? It would obviously be difficult to market outside of the enthusiast segment.

    The SSD memory modules can be pretty small, right? So they could even be made as a card that plugs into a secondary connector on the enclosure. That would be good for computer builders as well as for upgrading and recoverability. Then future operating systems could recognise these system SSD drives and automatically install the OS and swap files on them, while placing document libraries on the larger data drive. While in the longer term the HDD will probably disappear, there will always be a trade-off between speed, storage size, and expense.

    Read the article

  • Need a place to store a few bytes of meta information on storage media

    - by Jason C
    I'm working on an embedded project, and I need a place to store some filesystem-independent meta information on a storage device. The device has an MSDOS partition table. The device may also have unallocated space (depending on its size), but it will be TRIMmed (and may also be blown away by new partitions in the future). I need a location on the device that is not unallocated and that has a low risk of being touched (outside of completely erasing the device). The device is only guaranteed to have an MBR at the point the metadata first needs to be written, meaning there are no EBRs/VBRs present that I could use.

    There are 446 bytes at the very start of the device available for MBR bootstrap code. Currently my only idea is to store the data at the end of this block. However, the device is bootable, and I have no way of knowing whether I'd be blowing away bootstrap code or not. The sector size is 512 bytes and the MBR is the first sector; I'm pretty sure (correct me if I'm wrong) that means the second sector is available for use by partition data, so I can't use that either.

    Does anybody have any ideas? I need 4 bytes of space.
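
    If the end-of-bootstrap idea does turn out to be safe for your devices (i.e. you can establish that the boot code never reaches those bytes), the mechanics are simple. A sketch; the device name is hypothetical, and offset 442 is the last 4 bytes of the 446-byte bootstrap area, just before the partition table at offset 446:

        # hedged sketch: stash 4 bytes at the tail of the MBR bootstrap area
        printf 'META' | dd of=/dev/sdX bs=1 seek=442 count=4 conv=notrunc
        # read them back
        dd if=/dev/sdX bs=1 skip=442 count=4 2>/dev/null | od -A d -t x1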

    Read the article

  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server's share points. However, the clients randomly drop a couple of the share points, although they can still see the server.

    For example, say I have the following share points on the Mac server:

        Share A
        Share B
        Share C
        Share D
        Share E

    The Windows client can see and access these shares most of the time, but randomly a couple of the shares just get dropped or go missing from the client's view, so I end up with something like:

        Share B
        Share D
        Share E

    All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB:

        SMB sharing enabled
        Standalone Server
        Workgroup of CORPORATE
        Allow Guest Access = YES
        Client connections limit = 100
        Authentication: NTLMv2 & Kerberos and NTLM
        Code Page is Latin US (437)
        This is a workgroup master browser
        WINS registration is set to Enable WINS server (tried with the setting off)
        Enable virtual share points for homes: YES

    I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client:

        /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534)
        read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer

    I am a bit stumped as to which direction to turn to get this resolved. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.

    Read the article

  • Need to restore emails that Outlook deleted from the mail server

    - by Mugen
    I wanted to use POP mail for my Yahoo mail account, so I went to the Yahoo mail settings and enabled the POP feature. I installed Microsoft Outlook 2007 and added the settings for my Yahoo mail account. Then I performed a sync, and successfully received all the emails from my Yahoo account on my PC. But when I logged into my Yahoo mail online, to my horror, all the mails were deleted. It seems someone at Microsoft, for some reason, decided to leave the "Leave a copy of messages on the server" setting unchecked by default.

    This is a very old account, and I need all the mails to remain on the mail server too, so I can access them anytime in the future. Is there any way I can restore these emails on the server? Any ideas how I can get it back to the way it was? I have been googling this for quite some time now, but I'm not able to find an answer. Any helpful suggestions are welcome. Thanks.

    Read the article

  • Which revision control system for single user

    - by G. Bach
    I'm looking to set up a revision control system with me as the single user. I'd like access (read and write) to be protected using SSL, little overhead, and preferably a simple setup. I'm looking to do this on my own server, so I don't want the option of registering with some professional provider of such a service (I like having direct control over my data; also, I'd like to know how to set something like that up).

    As far as I'm aware, the kind of project I want to put under revision control doesn't really matter, but just for completeness' sake: I'm planning on using this for Java projects, some HTML/CSS/PHP stuff, and in the future possibly as a synchronizing tool for small databases (ignore that latter one if it doesn't fit the paradigm of revision control).

    My questions primarily arise from the fact that I've only ever used Subversion from Eclipse, so I don't have thorough knowledge of what's out there, what fits which needs better, etc. So far I've heard of Subversion, Git, and Mercurial, but I'm open to any system that's widely used and well supported. My server is running Ubuntu 11.10. Which system should I choose, what are the advantages of the respective systems, and if you know of any particularly useful ones, are there tutorials regarding the setup of the system I should choose that you could recommend?
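
    For the single-user, own-server case specifically, a distributed system over SSH is about as little setup as it gets, with SSH providing the encrypted transport you would otherwise want SSL for. A hedged sketch with Git; the host name and paths are placeholders:

        # on the server: create a bare repository to push to
        ssh you@yourserver 'git init --bare ~/repos/project.git'
        # on your workstation: clone it over SSH and work as usual
        git clone you@yourserver:repos/project.git
        cd project
        # ... add files, then:
        git add -A
        git commit -m "first commit"
        git push origin master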

    Read the article
