Search Results

Search found 15129 results on 606 pages for 'orientation changes'.


  • Is domain-transfer inherently safe for downtime when the name servers remain the same?

    - by jlmt
    I've been reading around this topic to understand whether there's some or no chance of downtime during an upcoming domain transfer for 15 live and very critical domains. In our case there are three companies involved: CompanyA is the original registrar and DNS host, CompanyB is the new DNS host, and CompanyC is the new registrar. I've already changed the nameservers for all domains to those of CompanyB. We suffered some downtime because CompanyA deleted their hosted DNS for our domains directly after the change, but the changes propagated and we're now able to configure our DNS with CompanyB. From what I understand (please correct where wrong!):

      - There exists an SOA record that points oneofourdomains.com to ns.companyb.com. That record is maintained and authoritatively hosted by the ccTLD registry for the domain (e.g. Verisign for .com). CompanyA currently has the ability to change the SOA record because they're the registrar.
      - There exist NS records for oneofourdomains.com, which are also related to the link from domain name to nameserver, are similarly hosted by the ccTLD, and which CompanyA are also able to change while acting as registrar.
      - Neither CompanyB nor CompanyC currently have any control over the SOA or NS records.
      - CompanyA are unable to cause us (DNS) problems during the transfer by dropping service early, because they are not the authoritative source for the SOA and NS records.
      - When we transfer the domains, it's administrative control of the SOA and NS records that will be transferred to CompanyC.
      - As long as we advise CompanyC that the SOA and NS records must not change (as regards pointing to CompanyB's nameservers), there's no need for any kind of DNS change, and therefore no possibility of downtime.

    Is my understanding of this correct? My fear is that CompanyA will somehow cut us off again, and their support dept hasn't given me much confidence in their understanding of the topic.
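
    One way to independently confirm what the registry-level delegation currently says (and to watch it during the transfer) is to query the NS and SOA records directly. The sketch below is only an illustration: it shells out to dig, which is assumed to be installed, and the domain name is a placeholder taken from the question.

        import subprocess

        def lookup(rrtype: str, name: str) -> str:
            """Return the short-form dig answer for the given record type."""
            result = subprocess.run(
                ["dig", "+short", rrtype, name],
                capture_output=True, text=True, check=True,
            )
            return result.stdout.strip()

        domain = "oneofourdomains.com"  # placeholder
        print("NS :", lookup("NS", domain))   # should list CompanyB's nameservers
        print("SOA:", lookup("SOA", domain))  # primary nameserver and zone serial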

    Read the article

  • Are there any other causes of this error that are NOT related to initial setup?

    - by LordScree
    I'm trying to diagnose an issue at a customer site. They are receiving the following error: "A network-related or instance-specific error occurred while establishing a connection to SQL Server". I've seen this a few times, but only during the initial setup - it's often caused by one of the following:

      - The database server is turned off
      - The network connection between the database server and the application is closed or somehow blocked (e.g. a firewall)
      - The SQL Server instance is not set up to receive remote connections from the application server (e.g. TCP is turned off, remote connections are disabled, or the "SQL Server Browser" service is stopped/disabled)

    However, if I assume that no configuration changes have been made, I'm trying to work out what the reason might be for getting this error at a random point after the initial setup. My initial thought is: the SQL Server machine has run out of resources (e.g. RAM) and is unable to accept new requests from the application server. Is this a valid theory? What other possible causes are there of this error that are not related to the initial setup of the server / application connection? Or is it simply impossible that this error could occur without a configuration change having been made (either on the SQL Server side, the application side, or somewhere in between on the network)? NOTE: I believe this question differs from the plethora of questions related to this error message because the application and server have been talking to each other quite happily until now (most, if not all, other questions seem to relate to initial setup).
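
    A quick way to separate a network or firewall problem from resource exhaustion on the database server is a plain TCP probe of the instance's port from the application server at the moment the error occurs. This is only a sketch: it assumes a default instance on TCP 1433 (named instances typically use a dynamic port), and the host name is a placeholder.

        import socket

        def sql_port_reachable(host: str, port: int = 1433, timeout: float = 3.0) -> bool:
            """Return True if a TCP connection to host:port succeeds within timeout."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError as exc:
                print(f"connection failed: {exc}")
                return False

        print("reachable:", sql_port_reachable("sqlserver01.example.local"))

    If the port still answers while the application is getting the error, that points more towards the instance itself (resources, connection limits) than towards the network path.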

    Read the article

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Win 2003 SP2 servers and several Win XP SP2 and SP3 clients, all in the same LAN. The firewall is disabled everywhere, and there have been no recent Windows updates or configuration changes. This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is: "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I can, however, open the same shares if I use the FQDN or IP address: \\server1.domain.local\c$ or \\172.0.0.1\c$. Other shares do not have this issue and I can open them without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.

    Read the article

  • Why does lighttpd keep static files in cache, even when modified on disk ?

    - by Pixelastic
    I am using lighttpd to serve static files. I have a bunch of images in a dir that I regularly update. This will change the file content (and filesize) as well as the modification date, but not their filenames. When I access the files through http, the updates are not taken into account and lighty serves the old file. If I manually rename the file to something different, lighttpd will return a 404 error, and if I rename my file back, I will get the correct updated version. It seems like lighty is using some kind of cache mechanism of its own (which is fine) to return static files. Unfortunately, it seems that this mechanism doesn't update itself when files are modified. I checked through Wireshark, and my browser really is making a request for the file; this is not a browser caching issue. It returns a 200 OK when requesting it from an empty cache, and a 304 Not Modified otherwise, as expected. But the file is returned with a wrong Last-Modified header that does not reflect the real last modification date. Maybe there is some config directive that I am not aware of? I would like the files returned by lighty to reflect the changes made on disk directly, or at least to be able to invalidate its cache.
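
    One way to confirm that the stale metadata is coming from lighttpd itself (and not from the browser or an intermediary) is to compare the Last-Modified header it serves against the file's mtime on disk. The sketch below assumes it runs on the server and uses placeholder URL and path values. If the header really is stale, lighttpd's stat cache (the server.stat-cache-engine directive) is a reasonable place to look.

        import os
        import urllib.request
        from datetime import datetime, timezone
        from email.utils import parsedate_to_datetime

        url = "http://localhost/images/example.jpg"    # placeholder URL
        path = "/var/www/images/example.jpg"           # placeholder path on disk

        with urllib.request.urlopen(url) as resp:
            served = parsedate_to_datetime(resp.headers["Last-Modified"])

        on_disk = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)

        print("Last-Modified served:", served)
        print("mtime on disk       :", on_disk)
        print("header is stale     :", served < on_disk)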

    Read the article

  • Is there an Installer Analyser tool that can list what Registry Keys will be created?

    - by EvoGamer
    I can think of 3 ways to achieve my goal:

      1. Create a clean VPC, install a given piece of software, and compare the before and after states.
      2. Somehow reverse-engineer the installer.
      3. Somehow redirect the output of the installer in question so that all registry calls and copy/move file commands are recorded, but not executed.

    The first option can be done manually, or potentially automated, but I feel it's rather OTT for my needs. The second could cause all sorts of licencing issues, not to mention it may not always return a correct result. Also, without delving into hex editing, I can't think of a way that it would be possible to do manually (some installers - e.g. anti-virus software - may react unfavourably to automated attempts to investigate the installer). The third option shows the most promise, although if the first could be stripped down into a lightweight throwaway environment, it would work pretty much the same way. However, I'm not sure how to do it. So my question is: what tools are available (if any), and/or how could I find out this information manually? I'm not looking to reverse-engineer anything (if I can help it), but I just want to know exactly what changes are being made to my PC by a given piece of software.
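
    The before/after comparison in option 1 does not strictly need a VM: a registry hive can be snapshotted before and after the install and the two snapshots diffed. The sketch below is Windows-only, uses only the standard winreg module, records key paths only (values could be captured the same way), and is meant as an outline rather than a finished tool.

        import winreg

        def snapshot(root, path):
            """Return the set of all subkey paths under root\path."""
            keys = {path}
            try:
                key = winreg.OpenKey(root, path)
            except OSError:
                return keys                      # e.g. access denied - skip this branch
            index = 0
            while True:
                try:
                    sub = winreg.EnumKey(key, index)
                except OSError:
                    break                        # no more subkeys
                keys |= snapshot(root, f"{path}\\{sub}")
                index += 1
            winreg.CloseKey(key)
            return keys

        before = snapshot(winreg.HKEY_LOCAL_MACHINE, "SOFTWARE")
        input("Run the installer now, then press Enter... ")
        after = snapshot(winreg.HKEY_LOCAL_MACHINE, "SOFTWARE")

        for added in sorted(after - before):
            print("new key:", added)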

    Read the article

  • How do I back up Hyper-V VMs with Windows Server backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and google, and I CAN find information about how to back up Hyper-V virtual machines by using Windows Server Backup from the Hyper-V host in Windows Server 2008. You have to set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found here: http://support.microsoft.com/kb/958662 ... but it said that it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS service for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a windows server backup for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes for R2?
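
    Before the scheduled job runs, it may be worth checking whether the Hyper-V VSS writer is registered on the R2 host at all, since that is what the registry change on 2008 RTM was meant to enable. The sketch below just wraps vssadmin (which must be run from an elevated prompt on the Hyper-V host) and looks for a writer whose name mentions Hyper-V.

        import subprocess

        output = subprocess.run(
            ["vssadmin", "list", "writers"],
            capture_output=True, text=True, check=True,
        ).stdout

        writers = [line.strip() for line in output.splitlines() if "Writer name" in line]
        hyperv = [w for w in writers if "Hyper-V" in w]

        print(hyperv or "No Hyper-V VSS writer listed - online VM backups may not work.")

    If the writer shows up without any registry changes, that would suggest R2 no longer needs the manual key from the 2008-era instructions; if it does not, the backup may still run but could capture the VMs in a saved or offline state.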

    Read the article

  • Korean characters not appearing in Korean Windows XP computer

    - by user13267
    I am using a piece of Korean software (with a partial English interface) on a Korean version of Windows XP SP3. However, in parts of the software, even when I change the interface to Korean, Korean letters show up as random characters, as shown here: This is happening in other parts of the software as well, and I am not sure what the difference is between the places where this happens and the places where it does not. For example, a command button where Korean letters show up properly is shown below: This software is a video conferencing application and has a chat feature as well. When I type into the chat box, I can see the Korean letters appear properly at my side, but when I press Enter and send the message, it changes into random characters in the chat box, as shown above. What could be the issue here? Could it be a missing font on my computer? Since this is a Korean Windows installation I was hoping everything would work properly by default. What can be done here? EDIT 1: I asked some other people who are using this software, and they think that the problem is at my end, and that playing around with the Regional and Language Settings might solve the problem. They also suggested I install all the language packs related to Korean display. But it looks like all the language packs are installed, and my location is set to Korea in Regional and Language Settings in Control Panel, and I still have this problem. I have also had similar problems with displaying Korean on an English Windows XP computer. This answer suggested some solutions, but I still do not quite understand exactly what I have to do (at that time I had not fixed the problem, as I later on changed the computer). If I follow that answer, what fonts exactly do I need to install?
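
    For context, the classic mechanism behind this kind of garbling is text encoded under one code page being decoded under another - for example, a non-Unicode component falling back to a Western code page instead of Korean (CP949). The snippet below only illustrates that effect; it is not specific to this program.

        korean = "안녕하세요"                    # "hello" in Korean

        utf8_bytes = korean.encode("utf-8")      # bytes as a UTF-8 sender would produce them

        # Decoded with the wrong (Western) code page: random-looking characters.
        print(utf8_bytes.decode("cp1252", errors="replace"))

        # Decoded with the correct encoding: the original text.
        print(utf8_bytes.decode("utf-8"))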

    Read the article

  • 2 Computers, same network, different outgoing speeds when uploading to internet?

    - by user117339
    I have 2 work machines in my office, a PowerMac G5 and a MacBook Air. Both behind an IPCop firewall. The PowerMac is connected through a gigabit switch, the MacBook Air is connected through a Netgear 802.11g access point that is then plugged into the gigabit switch. There is also a FreeNAS box, both machines are able to read and write files to it at close to their pipe speeds. The main problem is when I am trying to upload files to the internet at large. The G5 is only hitting 0.1 - 0.25 Mbps. The Macbook is able to hit 2-3 Mbps. The setup (G5 / IPCop / Network) has been the same for 5 years. The issues with the internet speed started about 3 months ago. I hadn't tested on the Macbook at this point. I had complained to the ISP, they said their modem needed a firmware update, did that nothing changed. Reset IPCop, turned off squid, etc. No changes. The ISP switched the office over to a better plan with a theoretical 6 Mbps up, still no change. At this point I tried testing the Macbook, and lo and behold there's the speed. But why? I have tried changing out everything, cables, switches, using another ethernet port on the G5, wiping the system, using DHCP, using manual IPs, changing DNS servers, etc. Nothing works. I figured that if there was something horribly wrong with the network, then internally I would find a similar issue, but that is perfect. iperf, ping, etc show no dropped packets and near saturation of the internal network. I'm at a loss as to what the heck is going on. Any ideas would be appreciated! Below are some screenshots of speedtest.net: G5: Macbook Air:

    Read the article

  • How do you enable webcam support in Facebook for Ubuntu 10.04?

    - by Jonathan
    I think I have finally arrived at an insolvable equation: Chromium v.7 + Ubuntu 10.04 + Sun Java 6 + webcam + Facebook + Flash 10 = non-functional. All of the items listed above are potential points of failure in this situation, and any help narrowing them down would be fantastic. I am simply trying to enable webcam support directly through Facebook's website. Forum searches and the usual googling turn up few posts related to this specific equation. Two of the major suggestions include:

      1. Installing the Sun (I refuse to say Oracle, sob)-provided Java implementation instead of the OpenJDK normally installed in Ubuntu. And yes, after installing it, I did update all my defaults to use the Sun commands over the OpenJDK ones.
      2. Somehow enabling Facebook as a permitted site to access my webcam using Flash settings.

    I have not been able to explore option 2 because I cannot find a way to adjust the Flash settings in Chromium 7. Other factors that do not help include the fact that I am pretty sure Facebook changes its webcam interface every 10 seconds just to keep troubleshooters and support personnel on their toes. If anyone has an OTP that informs us of the next shift in the app, a leak would be greatly appreciated!

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set of data (around 200GB) that I HAVE to keep available. These are basically shared VHD images that serve as master images for a lot of our VMs - the VMs then run in differential discs off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but once a file is there it gets set read-only, and that is how it stays until it is retired. Obviously, I need as much uptime as possible. SO FAR we run that by having this directory local on every Hyper-V server. Now I am thinking of moving this into our storage fabric. Due to the "it HAS to be there" requirement I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move into a scenario where the storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disc behind it. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSDs only for this, both with their own separate 2000W UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is an edge case, obviously - as the files are read only once in use.

    Read the article

  • Concurrent users with Quickbooks?

    - by airietis
    I work in a company with 3 people who regularly use the same QuickBooks file. However, they work remotely on different networks. I need to implement a solution that allows all three of us to access QuickBooks at the same time remotely (and each make changes at the same time). We have a spare desktop PC that can be utilized as a server. So, my question is: what is the cheapest and most hassle-free solution to this problem? I've considered using application cloud hosting; however, it is very expensive ($40 per user per month) and we are on a tight budget. Is it possible to install QuickBooks on my own server and have them connect to it remotely? If so, what is the best way to accomplish this? Remote Desktop Protocol? Or is there a built-in feature for this in QuickBooks Premier 2013? EDIT: As MDMarra mentioned, I am looking for a solution that offers true simultaneous access. Would using a dedicated server and having users connect over a VPN be a viable solution?

    Read the article

  • How can I see what processes make my server slow?

    - by Steven
    All my websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load some of the time. There have been no changes to the sites for the last couple of months. How can I see what processes are making my server slow? My environment looks like this:

      - Server: VPS running Linux 2.8.x
      - OS: CentOS 5
      - Management interface: Plesk 9.x
      - Memory: 1024MB
      - CPU: 2.2GHz

    My websites run on PHP and MySQL. I finally managed to connect (PuTTY + SSH) to my server. Running top did not show any processes using more than 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what can be wrong. Here are some answers to Sean Kimball:

      - I don't run mail services on my server yet.
      - There are no specific bandwidth peaks.
      - Prefork looks like this:

            <IfModule prefork.c>
                StartServers        8
                MinSpareServers     5
                MaxSpareServers    20
                ServerLimit       256
                MaxClients        256
                MaxRequestsPerChild 4000
            </IfModule>

      - Not sure what you mean by the DNS question, but I think it's up and running.
      - There are no processes running wild.
      - Where can I find the average load?
      - Telnet is disabled and I have to log in using SSH :)
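
    If top's per-process view is inconclusive, a short sampling script can make it easier to capture which processes hold CPU and memory during a slow spell and to compare several snapshots over time. The sketch below relies on the third-party psutil package (pip install psutil) and is only illustrative.

        import time
        import psutil

        procs = list(psutil.process_iter(["pid", "name"]))

        # Prime the per-process CPU counters, wait, then read usage over the interval.
        for proc in procs:
            try:
                proc.cpu_percent(None)
            except psutil.Error:
                pass

        time.sleep(5)

        rows = []
        for proc in procs:
            try:
                rows.append((proc.cpu_percent(None), proc.memory_percent(),
                             proc.info["pid"], proc.info["name"]))
            except psutil.Error:
                pass

        for cpu, mem, pid, name in sorted(rows, reverse=True)[:15]:
            print(f"{cpu:6.1f}% CPU  {mem:5.1f}% MEM  pid {pid:<6} {name}")

    The load average asked about is also visible on the first line of top's output, or in /proc/loadavg.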

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now and for the most part it has been working great. The issue arises when we do fresh code updates from Git, and any/all of the JS/CSS assets change. Basically the problem appears to be twofold. One, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached for many people client side, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client side due to the cached minified assets. The simple solution is to disable mod_pagespeed, but I would rather not do that as it has made a fairly large impact in performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent people from having to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org http://wellplayed.org/tv Thanks in advance!
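
    One deploy-time approach that may reduce the inconsistency is to flush mod_pagespeed's file cache and invalidate Varnish explicitly instead of bouncing Apache. The sketch below assumes the default ModPagespeedFileCachePath of /var/cache/mod_pagespeed and a locally reachable varnishadm; both are assumptions to verify against the actual configuration.

        import pathlib
        import subprocess

        # Touching cache.flush under the pagespeed file cache path asks mod_pagespeed
        # to treat previously rewritten resources as stale (path is an assumption).
        pathlib.Path("/var/cache/mod_pagespeed/cache.flush").touch()

        # Ban everything Varnish has cached for the site (Varnish 3+ ban syntax).
        subprocess.run(["varnishadm", "ban", "req.url", "~", "."], check=True)

        print("pagespeed cache flushed, varnish ban issued")

    Since the rewritten asset URLs embed a content hash, stale HTML cached at the client or in Varnish is usually what keeps pointing at the old asset names, so the HTML's cache lifetime is worth checking alongside the asset caches themselves.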

    Read the article

  • Disable all the idiot-checking in Mac OS X

    - by Fake Name
    I am a Windows/Linux user who is learning Mac OS X out of interest in doing dev work for the iPad which I recently purchased. However, OS X is driving me nuts by trying to protect all its system files, hiding all of the important OS components I want to tweak, and generally making it impossible to do any modification to the OS in general to make it more usable. Therefore, is there a way to turn off all the idiot-checking in Finder? On XP, I can disable "Hide protected operating system files" and set "Show hidden files". On Linux, there really aren't many hidden files, and changing the configuration for .files is easy enough in GNOME and XFCE. How can I set up OS X in a similar way? I am not new to computers, and I am fully aware that deleting system files can damage or even irreparably disable an OS install. Therefore, if I intentionally try to delete a file, or move something, it's probably intentional, and I am willing to accept the consequences in any case. At this point, I have fallen back to doing everything through the command line (which takes forever), because Finder is practically unusable. (As for what I am attempting to do, I also asked about GUI changes here.)

    Read the article

  • How can I disable flashing icons on Windows 7 taskbar?

    - by Jebego
    I set my Windows 7 taskbar to auto-hide. However, sometimes when a program changes or something new happens in a program, the taskbar will show itself, and the program's taskbar icon will begin flashing orange. Here's what I'm talking about: To make the taskbar hide again, I have to click on the program before I can go back to what I was doing. Anyway, I personally find this very annoying, and would love to find a way to either:

      - prevent the taskbar from having such alerts, or
      - prevent the taskbar from showing itself when it has such alerts.

    I've searched around quite a bit, and really only found answers to this for XP. I've also found another Stack Exchange question looking for the same thing for Windows 7. However, none of the answers to that question were really what I'm looking for. I'm not looking to hide the taskbar, or control the number of flashes. However, this answer seems to be what I'm looking for, so I downloaded and tried out the program. It works perfectly, other than the fact that the Start menu icon is always shown, regardless of the taskbar being set to auto-hide. So, any ideas on how to fix this problem?

    Read the article

  • Distributing Microsoft Office Template or Macro over the network

    - by zfranciscus
    We have around 400 users who use Word, and we want to make their lives easier by distributing templates and macros over the network. The easiest way to do this, of course, is to set up a shared network folder and let them get the appropriate templates and macros from there. But then each user has to know where to copy these files to on their local PC, and we have to rely on constant email communication to let them know about newer versions of the macros and templates. The next alternative is to ask them to configure Word to point to this network folder. But of course any disruption to the network means disruption to their work. We are thinking of setting up a synchronization mechanism that downloads new templates to their local machines. We are also thinking of making this sync tool prompt users that it will download new templates - you know, just to give them visibility that they are receiving changes. We are wondering what is the best approach that people usually use in their workplaces? Are there any specific tools that can make this task easier?
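
    The synchronization step itself can be a very small script run at logon: copy a template from the share only when the copy on the share is newer than the local one, and report what changed. The sketch below uses placeholder paths for the share and for the user's Word template folder, so both would need adjusting.

        import shutil
        from pathlib import Path

        share = Path(r"\\fileserver\word-templates")                  # placeholder UNC path
        local = Path.home() / "AppData/Roaming/Microsoft/Templates"   # typical Word template folder

        local.mkdir(parents=True, exist_ok=True)
        updated = []

        for src in share.glob("*.dot*"):        # .dot, .dotx, .dotm
            dst = local / src.name
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                shutil.copy2(src, dst)          # copy content and timestamps
                updated.append(src.name)

        print("Updated templates:", ", ".join(updated) if updated else "none")

    The same script doubles as the prompt: run in a console window (or adapted to show a message box), its output gives users the visibility that new templates were received.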

    Read the article

  • Backing up 80G hard drive 1G per day

    - by barrycarter
    I want to securely back up my 80G HD, but doing a complete backup takes forever and slows down my machine, so I want to back up just 1G per day. Details:

      - First hurdle: on the first day, I want to back up the "first" 1G of the hard drive. Of course, there really is no "first" 1G on a hard drive.
      - After 80 days, I'll have my whole HD backed up... assuming none of my files ever change, which of course they do. So the backup plan/program must also catch file creations/changes as they come along.
      - The backups must be consistent, in that I can restore my system by restoring the backups sequentially. In other words, "dd if=/harddrive" probably won't work.
      - The backups should encrypt file contents AND names, but I don't see this as a major hurdle.
      - Once the backup has backed up everything (even changed files), it can re-back up the first 1G on my hard drive. Even though this backup is redundant, that's OK, because I always want to be backing up something (e.g., if I'm backing up to optical media, the older media might start going corrupt).

    Is there a magic backup plan/program that does this? In reality, I want to do this for multiple machines with multiple drives each, but I think that solving the above will solve the general case.
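
    The bookkeeping for a scheme like this is small enough to sketch: keep a manifest of what has been copied (with modification times), and on each run copy anything new or changed until roughly 1 GB has moved, leaving the rest for later runs. This outline ignores the encryption and consistency requirements entirely, and all paths are placeholders.

        import json
        import shutil
        from pathlib import Path

        SOURCE = Path("/home/me")            # placeholder: tree to protect
        DEST = Path("/mnt/backup")           # placeholder: backup target
        MANIFEST = DEST / "manifest.json"
        DAILY_BUDGET = 1 * 1024 ** 3         # roughly 1 GB per run

        DEST.mkdir(parents=True, exist_ok=True)
        done = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
        copied = 0

        for src in sorted(SOURCE.rglob("*")):
            if copied >= DAILY_BUDGET:
                break
            if not src.is_file():
                continue
            rel = str(src.relative_to(SOURCE))
            mtime = src.stat().st_mtime
            if done.get(rel) == mtime:
                continue                     # unchanged and already backed up
            dst = DEST / "data" / rel
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            done[rel] = mtime
            copied += src.stat().st_size

        MANIFEST.write_text(json.dumps(done))
        print(f"copied {copied / 1e9:.2f} GB this run")

    Layering encryption on top (for example by pointing DEST at an encrypted volume) and deciding how deletions should be handled are the parts this sketch deliberately leaves out.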

    Read the article

  • Mercurial confusion - commit / push, backouts

    - by Madmanguruman
    I'm trying to set up a repository on a shared filesystem. I'm using Mercurial 2.1.2 on a Windows-based architecture. I start with an empty folder on the shared filesystem and create a repository in it. After this, I dump in the baseline files, and add them to versioning, then commit the changes. I then clone the repository to my local hard drive. I then make a change in my local repository, commit it, then push back to the shared filesystem repository. The shared repo graph I get in TortoiseHG looks strange (to me). This is the shared repo: This is the local repo: On the shared repo, the working directory always shows up on the top, then the graph goes 'down' to rev. 0 then back 'up' again through various revisions. It looks to me like I have two different branches, even though everything is on the default branch. Also, that 'top' revision always says "* Working Directory * Not a head revision!" I noticed that in my local repository, I don't get that dangling working directory at the top of the list - everything is in one branch. I also noticed that on my local repository, I can back out the tip revision with no problem. On the shared filesystem repository, I cannot, since I get an error ("Cannot backout change on a different branch"). How can this be? Aren't they supposed to be identical to each other? Am I fundamentally doing something wrong?

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various other ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config so that traffic flows by hostname as well. The following is the current setup, assuming:

      - my apache webserver is running at example<dot>com:8001
      - my bunch of app servers are at 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep the same behaviour as above, but in addition, in case the hostname is myspecialtopleveldomain<dot>com, I would like to flow traffic to example<dot>com:8001. ~B

    Read the article

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently, we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:

      1. Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.
      2. Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any change would be a change to the actual container rather than to individual files.

    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!

    Read the article

  • File exists but is unreadable by PHP

    - by Aron
    More than once I have run into this issue: I have a cache file that is automatically generated by PHP. It contains some generated PHP code. However, for some reason the file cannot be read and parsed by PHP. These are the symptoms:

      - The file actually exists on the file system. Using Terminal you can navigate to the file, view its contents (which are fully intact), etc.
      - PHP file_exists() will report that the file exists... which is correct, since it does :)
      - Then I include() the file. But when actually parsing the file, PHP will just consider it an empty file. No fatal error, just no PHP code actually executed. Again, it's as if the file was completely empty (which I assure you, it is not)...
      - It is not a permissions issue. Permissions are set as needed.
      - Workaround: open the file in Terminal via 'nano' or some other text editor and just save it to the disk again. After that (despite no changes to the content) PHP will run it just fine...

    As a clarification, I'd like to add that this happens rarely, but frequently enough to be a problem. And even when it does, there are hundreds of other similar files on the same system that work without a problem... If this were an issue affecting only my own scripts, I would consider that there must be a bug in the way I generate the PHP code. But no, the issue has occurred more than once when deploying to a server (usually from a Beanstalk repository via FTP). The issue has been present on various servers, Debian and Ubuntu running Zend Community Server. Any ideas? One that crossed my mind was opcode caching (part of Zend Server CE)... could it be that an empty version of the file is cached if it is requested while the write operation is still in progress?

    Read the article

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on the dev machine to add new features and fix bugs. When I'm happy that all works well, I scp the relevant code to the Live system. The database (MySQL) is local to each machine. Now this is a pretty basic setup really, and I want to improve the workflow a bit. I use git and GitHub for version control. Admittedly I've only really used one branch. There can be 3 different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code to GitHub at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server rather than locally. I need to create a new process because we need to have a main trunk now and a branch with a big code rewrite. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? e.g.

      - /var/www/mysite_derek
      - /var/www/mysite_paul
      - /var/www/mysite_mike

    My thinking is they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with git locally and with GitHub. Will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?

    Read the article

  • Can't get 1440x900 resolution with GRUB2 although vbeinfo says it's available

    - by TomSW
    I'm trying to use GRUB2 in graphical mode with 1440x900 resolution, but the result is always garbled nonsense: the highest resolution I can get is 1280x800. Word from googling is that as long as vbeinfo lists a resolution, GRUB2 can use it. This doesn't seem to be true: vbeinfo says that 1440x900 is available but it doesn't work. Testing it from the GRUB2 command line:

        set gxfmode=1440x900
        terminal_output gfxterm
        # -> garbled nonsense
        # back to trusty 640x480
        terminal_output console

    The graphics card is an Intel GM965. Once Linux boots, the framebuffer switches to 1440x900. Added after epheminent's reply and various experiments: vbeinfo lists two sets of modes. The first set runs from 0x160 to 0x16b, with resolutions 768x480, 960x600, 1280x800 and 1440x900. Then - after a bunch of text-only modes - comes the second set, containing resolutions 1024x768, 800x600, and 640x480. The first set of modes aren't altered by 915resolution; they all work except 1440x900. The resolution of modes in the second set can be altered using the 915resolution module / command available in GRUB2 >= 1.99:

        # in /boot/grub/grub.cfg
        insmod 915resolution
        # 30, 32, 34 all work for me: all that varies is which modes are altered
        915resolution 30 1440 900
        # setting an impossible resolution changes the mode to "text-only"
        # in my case 1280x1024 is not supported
        915resolution 30 1280 1024

    Clearly, 1440x900 should just work: adding it with 915resolution is just a workaround.

    Read the article

  • lamp -- edit PHP file but doesn't change web output -- including die()

    - by Reid W
    Server is standard Linux server on Amazon Web Services. Cent OS 5/Apache/PHP 5.3. No APC. It's worked fine for over a year, but now when I edit some but not all PHP files on the server using vi, the changes don't affect the web output. For example, I edit myfile.php and put a die() at the top, but when I load the page in my web browser, instead of the die() I see the content that would show up if the die() weren't there. svn updating the file in question doesn't help either. Files are on an Amazon EBS partition symlinked to /var/www/html. Just to reiterate -- this has worked fine for a long time. Restarting apache didn't help, nor did rebooting the server. What's weird is that it's just some of the files but not all. File ownership/permissions are the same for the "good" and "problem" files. I'm not a Linux newbie but am at a complete loss with this, and couldn't find anything on Google either. Any hints would be much appreciated!

    Read the article

  • Timestamp Updating Constantly on /dev/null

    - by motorleague
    I've been working on a problem with a /dev/null file on an AIX system (just for background, it looks as though it was inadvertently deleted and recreated as a normal file by somebody), but in trying to determine what caused the problem, I noticed that the timestamp on it seems to update every minute. I've observed this on several AIX servers at my workplace. At present I can't entirely rule out this being something specific to the application being used at my workplace, so I compared with CentOS and Debian based computers at home last night. The CentOS box, which runs 24 hours, had a mod time on /dev/null of around 4 days ago (during which time it was essentially just being used as a web browser and multimedia player, although it would have had active but essentially unused Apache, MySQL and VMM processes running in the background). The timestamp on /dev/null on the Debian machine, which was a just-booted laptop, pretty much reflected the boot time, but I tested redirecting STDIN from, and STDOUT to it, and the modification time was unchanged (I'm not 100% sure if directing data to /dev/null constitutes "writing to it" in the way it would for a normal file). So my question is essentially: could anybody please offer any advice with regard to what circumstances (permissions changes etc. aside) might cause the timestamp on /dev/null to update? Thanks very much for any suggestions. Alex.
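
    The redirection test is easy to script so it can be repeated on each AIX box and compared with the Linux machines: record the mtime, write through the device, and record it again. The sketch below is only that test, nothing more.

        import os
        import time

        def dev_null_mtime() -> float:
            return os.stat("/dev/null").st_mtime

        before = dev_null_mtime()

        # Equivalent of `echo hi > /dev/null` - push some data through the device.
        with open("/dev/null", "w") as sink:
            sink.write("hi\n")

        time.sleep(1)
        after = dev_null_mtime()

        print("mtime before write:", time.ctime(before))
        print("mtime after write :", time.ctime(after))
        print("changed by writing:", before != after)

    Run a few minutes apart without writing anything, the same mtime check will also show whether some other process on the box is touching the device on that once-a-minute cycle.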

    Read the article
