Search Results

Search found 6625 results on 265 pages for 'advice'.

Page 47/265

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, and Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. My JavaScript file is therefore going to be hotlinked by tons of websites, some of which may receive a lot of requests themselves. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances.
    2. Right now we can afford just one dedicated server, so I have minified the file, enabled gzip, and plan to set good cache-control headers through Apache. In the near future, when requests pick up, I will put an HTTP cache like Varnish in front. Is this a good plan for the near future?
    3. Should I be considering a CDN later (since we can't afford one now)? If so, how do I make sure we're prepared to migrate to it without breaking service? What are the pros/cons of moving just this one file to a CDN? Also, since it's just one JavaScript file (50 KB), is there a CDN affordable enough to consider from the beginning?
    4. Any other word of advice? Anything I shouldn't overlook at this stage which I would regret later, both in terms of the server and JavaScript/AJAX limitations?

    Thanks in advance.
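
    A quick way to sanity-check the caching setup in point 2, purely a sketch in Python assuming the file is already being served (the URL below is the question's own placeholder), is to fetch the file and inspect the response headers:

        import requests  # third-party HTTP client: pip install requests

        # Placeholder URL from the question; substitute the real location.
        url = "http://api.myservice.com/js/foo.js"

        # Ask for gzip explicitly so we can confirm compression is negotiated.
        resp = requests.get(url, headers={"Accept-Encoding": "gzip"})

        print("Status:          ", resp.status_code)
        print("Cache-Control:   ", resp.headers.get("Cache-Control"))    # want a long max-age
        print("Expires:         ", resp.headers.get("Expires"))
        print("Content-Encoding:", resp.headers.get("Content-Encoding")) # want 'gzip'

    If Cache-Control carries a long max-age and Content-Encoding comes back gzip, repeat visitors and intermediate proxies mostly stop hitting the origin, which is what makes the single-server plan workable until a CDN is affordable.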

  • Troubleshooting wireless connection problem / site survey?

    - by johnnyb10
    I just started in the IT department of a small company (200 users) and it's clear that one of the main problems driving everyone crazy is the spotty wireless connectivity throughout the office, particularly in certain conference rooms. This is a huge problem because the connection often drops during important presentations to clients. I was hired to help ease the load on the existing IT admin, who has done a great job but is overloaded with many other tasks, so I would like to try to help out with this wireless issue. I am looking for advice on the best way to solve this problem: a realistic troubleshooting methodology that does not require me to spend any money. So far, I've experimented with Ekahau HeatMapper, which is free and helps create a site survey, but I'm not exactly sure what I'm looking for, or whether there are other programs/tools/methods I should try as well. Any advice would be greatly appreciated. [Some background: the wireless setup consists of an HP ProCurve Mobility MSM (710?) controller that controls 10 access points throughout the building. There are three virtual wireless networks configured on the controller: one seems to be a default that cannot be changed, one is for internal employees and authenticates via Active Directory, and the third is a guest network for visitors. When I use HeatMapper, these show up as three different SSIDs with different MAC addresses, all on the same channel. At first I thought this might cause interference, but this seems to be the way the controller works; apparently, it automatically configures the channels to avoid interference from the other APs on the network.]

  • Convert DVD Movie to MPEG and view on PS3 via Windows Media Server 12

    - by Vidar
    I think the Apollo missions to the moon were easier than this! I have tried dozens of DVD ripping programs and media servers and have had limited success converting my DVDs into a file format that can be viewed on the PS3. I have also been on dozens of forums and it's all getting a bit confusing: some advice is out of date, some software is no longer updated, updates have been applied to the PS3 operating system and to Windows, and so on and so on. There has to be a way to get all this knowledge and information in one up-to-date place so people can do the same thing as me. Can anyone give me some definitive software and/or advice to do the following: I have over 200 DVDs, and I want to convert these to VOB files (renamed to MPEG so WMS can stream them), store them on a hard disk, and view them via Windows Media Server 12 (Windows 7). I will then be able to view them on my PS3 in the lounge and never have to get out another DVD case again. I don't want to encode to any other format like MP4 with H.264, because I would lose some of the original quality, so MPEG-2 is fine for me. Note: I have been using DVD Shrink, but it sometimes gives odd results. The main problem is that once the DVD has been ripped, WMS shows the wrong playing length for the film; however, if I use VLC Media Player it will play through the whole film OK. This is obviously no good when it comes to streaming on the PS3.
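
    For the rename step in that workflow, here is a minimal sketch (assuming the rip-to-VOB approach above; the library path is made up) that walks the rip folder and renames .vob files to .mpg so the media server will index them:

        import os

        # Hypothetical library location; point this at your own rip folder.
        LIBRARY = r"D:\Movies"

        for root, dirs, files in os.walk(LIBRARY):
            for name in files:
                if name.lower().endswith(".vob"):
                    src = os.path.join(root, name)
                    dst = os.path.splitext(src)[0] + ".mpg"
                    if not os.path.exists(dst):  # don't clobber an earlier rename
                        os.rename(src, dst)
                        print("renamed:", src, "->", dst)

    The wrong-duration symptom often traces back to the rip being split into 1 GB VOB segments; ripping the main title as one continuous file tends to give the indexer a length it can actually read.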

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of 2 instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP. I am trying to decide on several points:

    1. Which is the better policy: store the MySQL DB on a separate instance, or on an attached EBS volume? (From what I know, auto-scaling will not replicate an attached EBS volume but will generate new instances from a chosen AMI - is this view correct?)
    2. For the AMI I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues using the encrypted filesystem on the instance housing the web app?
    3. I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm looking for suggestions on how to get the two DBs to talk - will this be a big task?
    4. Regarding security updates: is it best to create a recent snapshot and just relaunch, letting Amazon install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch rather than install updates that force a restart?
    5. I plan to create two AMIs, one for the app server and one for the DB, each with the same security measures in place - is this reasonable? I figure it is better than having unnecessary applications included in an AMI I intend to use.
    6. My plan for backup is to create periodic snapshots of the web app and DB instances. (If I use an additional EBS volume instead of a separate instance, my understanding is that the EBS volume will persist in S3 in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)

    Thanks in advance for suggestions and advice. I am new to EC2 and may have described unnecessary overkill, but I want to implement what can be considered a best-practice solution, so all advice is appreciated.
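
    On the periodic-snapshot plan in point 6, this is what the automation can look like with the boto3 SDK (a newer tool than the question assumes; the region and volume ID below are placeholders):

        import boto3
        from datetime import datetime, timezone

        ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

        # Placeholder ID of the EBS volume holding the MySQL data.
        VOLUME_ID = "vol-0123456789abcdef0"

        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M")
        snapshot = ec2.create_snapshot(
            VolumeId=VOLUME_ID,
            Description=f"nightly backup {stamp}",
        )
        print("started snapshot:", snapshot["SnapshotId"])

    Run from cron, this gives the point-in-time EBS backups described above; snapshots are incremental and persist independently of the instance, which covers the unexpected-termination worry.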

  • Managing a high-traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, and consumers will visit the page and listen to the music. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn-rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investment. I have a game plan to grow fast enough to be successful, and slowly enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC
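
    As a starting point on bandwidth and burn rate, a back-of-the-envelope model like the sketch below (every number is invented for illustration) is worth keeping in a script or spreadsheet and re-running as real traffic data arrives:

        # Rough monthly bandwidth for an MP3 streaming site.
        # Every number here is an assumption; plug in your own.
        daily_listeners  = 500    # unique listeners per day
        tracks_per_visit = 8      # streams each listener plays
        avg_track_mb     = 5.0    # ~4-minute MP3 at 128 kbps

        monthly_gb = daily_listeners * tracks_per_visit * avg_track_mb * 30 / 1024
        print(f"Estimated transfer: {monthly_gb:,.0f} GB/month")

        cost_per_gb = 0.10        # assumed overage price, USD
        print(f"Bandwidth cost at ${cost_per_gb:.2f}/GB: ${monthly_gb * cost_per_gb:,.0f}/month")

    Transfer volumes like these are exactly what shared hosts throttle or cap, which is why starting on a small VPS or dedicated box with a known transfer allowance is usually easier to budget and to upgrade.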

  • Migrate Domain from Server 2008 R2 to Small Business Server 2011

    - by josecortesp
    I'm looking for some advice here: rather than the big "how to do it", I'm looking for what to do. I have a home server, quad-core with 4 GB of RAM (I really can't afford more right now), running Windows Server 2008 R2 with Active Directory, plus a Hyper-V virtual machine with SharePoint, TFS and a couple more things. I have at least 10 remote users, all of them joined via a Hamachi VPN (working great, by the way). I want to migrate all of that to a Small Business Server 2011 Standard. I tried to make a VM, join it to the domain and promote it, back it up, format the physical server, boot up the VM, promote the physical machine and then erase the VM - but I can't do that, because SBS requires at least 4 GB of RAM to install, so I can't give all 4 GB of physical RAM to a VM. I was thinking of using a laptop (all the clients are laptops) as a temporary server: join it to the domain, promote it, then format the server, install SBS on it and do it all again in reverse. I really need some advice. Thanks in advance. BTW, I know the software I'm using is kind of expensive and I can't afford more hardware; I have access to MS downloads through a university partnership, so I have all this software for free.

  • SBS 2011 Essentials and too many new Mac users

    - by Harry Muscle
    We currently have about 15 users on a Windows SBS 2011 Essentials server. I've just been informed that we plan to bring aboard about 15 more users who will be using Macs. We'll be using a Mac server to manage the 15 new Macs; however, I'm looking for advice on how best to set this all up. Ideally I would just add the 15 new Mac users to Active Directory and set up the Mac server to authenticate against AD. Unfortunately, SBS 2011 Essentials has a limit of 25 users, so adding these new users to AD won't work unless we upgrade the Windows server (which I'd rather avoid, since it's a lot of work and a lot of money). That leaves the option of creating accounts for these 15 Mac users on the Mac server only. The problem that creates, though, is how to share files between Mac users and Windows users when they are using different systems for network authentication. Any advice (short of upgrading to SBS Standard) is highly appreciated. Thanks, Harry. P.S. We don't run Exchange or anything else on our server... it's mainly used for file sharing and enforcing security via group policies.

  • Ubuntu server or Debian server (to run C++ apps developed on Ubuntu)

    - by skyeagle
    I have written a number of C++ server-side daemons for my website, using my Ubuntu 9.10 dev machine. The C++ apps are "GUI-less" daemons (and libraries used by the daemons). I am now about to host my website and need to decide whether to go with Debian Server or Ubuntu Server. In a nutshell, here is the situation: I developed on the Ubuntu desktop because I preferred its friendlier GUI. I would like to deploy on Debian Server because of the (perceived?) robustness of Debian over Ubuntu Server - I may be totally wrong here, and in fact this is really what the question is all about. If Debian Server is indeed more robust than Ubuntu Server, then I have no choice but to go with Debian - BUT, will my Ubuntu-built C++ apps run on the server, or do I need to recompile them there? (I'd HATE to have to do that, because I want to keep the server machine clean and light: no GUI, no dev tools, etc.) This last question is really about binary compatibility between Ubuntu and Debian. I want the server to be robust, secure and stable, and to simply act as a server (i.e. LAMP and very little else - no GUI, etc.). Given that requirement, and the fact that I need to run my C++ apps (developed on Ubuntu 9.10), I need advice on which OS to choose for the server; ideally, advice backed with a reason. I am particularly interested in hearing from people who have been in an identical situation or done something similar.
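
    On the binary-compatibility point, the practical check is which shared libraries, and which versioned glibc symbols, the daemons actually require; a small sketch (the binary path is a placeholder) that enumerates both with the standard ldd and objdump tools:

        import subprocess
        import sys

        # Placeholder path to one of the compiled daemons.
        binary = sys.argv[1] if len(sys.argv) > 1 else "./mydaemon"

        # ldd lists the shared objects the dynamic linker would load.
        print(subprocess.run(["ldd", binary], capture_output=True, text=True).stdout)

        # objdump -T shows versioned symbol requirements such as GLIBC_2.10;
        # the target Debian's glibc must be at least that new.
        dump = subprocess.run(["objdump", "-T", binary], capture_output=True, text=True)
        versions = {line.split()[-2] for line in dump.stdout.splitlines()
                    if "GLIBC_" in line}
        print("required glibc versions:", sorted(versions))

    The usual gotcha runs in one direction: Ubuntu 9.10 ships a newer glibc than Debian stable of the same era, so a binary built on Ubuntu may demand symbol versions Debian cannot provide. If the requirements are satisfied, the daemons run unmodified; if not, build on (or against) the Debian target rather than installing dev tools on the server - a chroot or a second Debian VM keeps the production box clean.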

  • Backup solution, or, how Duplicati duped me

    - by blarghmaster
    TL;DR version: Mono + Duplicati.commandline.exe restore etc. spits this out for several files regardless of what I try. I am able to list sets, list files in said sets, even do a verify, but each time I do a restore of any kind, I get errors to the effect of: Failed to restore file: "snapshot/blahblah/2005-11-07.tar.gz", Error message: The partial file record for snapshot/blahblah/2005-11-07.tar.gz does not match the file. Any advice here, or an idea of where to look for a better solution? FULL STORY: I've recently put together a nice, clean, friendly backup solution for several servers, predominantly Linux, but occasionally a Windows box is added too. The solution as-is meets all my requirements and does it well... save one: cross-compatibility. The solution is based on a combination of several elements, but eventually comes down to using Duplicity and Duplicati for the actual storage of files. The entire solution was ready to go before I realized that Duplicati does not, in fact, allow me to restore my files to a Linux box, regardless of what the command line under Mono might tell you. It just spits out errors on random zip and image files, for apparently no good reason, and I have tried several options to get it to restore, and several versions of Mono, including installing it pretty much lib-for-lib. There is no effective log file explaining these errors, and even the "--debug-output=true" flag does nothing. Now I could most likely use the friendly instructions on Duplicati's site and script a bash equivalent of the restore, but that's not exactly ideal. Any advice on this, or possibly an alternative solution that offers the same benefits as Duplicati/Duplicity but actually works across platforms?
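
    Since the stack already includes duplicity, one pragmatic escape hatch for the Linux-bound restores is to drive them through duplicity itself rather than Duplicati's engine; a sketch (the backend URL and paths are placeholders; restore, --file-to-restore and list-current-files are standard duplicity actions):

        import subprocess

        # Placeholder backend URL and restore target.
        BACKEND = "file:///mnt/backups/myserver"
        TARGET  = "/tmp/restore"

        # Relative paths as printed by `duplicity list-current-files BACKEND`.
        files = ["snapshot/blahblah/2005-11-07.tar.gz"]

        for relpath in files:
            cmd = ["duplicity", "restore",
                   "--file-to-restore", relpath,
                   BACKEND, f"{TARGET}/{relpath.replace('/', '_')}"]
            ok = subprocess.run(cmd).returncode == 0
            print(relpath, "->", "ok" if ok else "FAILED")

    Note this only works for archives duplicity itself wrote; it is a way to standardize the cross-platform half of the solution on duplicity, not a reader for Duplicati's own storage format.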

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load-balanced environment with Windows Server 2008. What are some best practices for setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never set up replication, but I have worked with a web farm that used it. Basically, I only know the basics of how it works, and I'm looking for any advice, guides, warnings, etc. on setting this up. In case it helps, here's our environment for now: we have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VMs. I am in the process of setting up the domain controller now (as I need one for DFS). Any ideas on the best way to go about setting up replication? Should we just put the prod server in from the start, or set up using a test VM and our second server and then switch it over later? I do not want to risk overwriting our prod server. Thanks!

  • Microsoft Licensing Scenario/Questions [closed]

    - by user17455
    Possible Duplicate: Can you help me with my software licensing question? I am a member of a team developing a third-party application (APP) that listens for and services connections from remote devices via TCP. Some of these remote devices allow one or more users to interact with the device; on others, user interaction is impossible. The user/remote device makes no use of any Windows Server service - not DHCP, not IIS, not File Server, not Print Server, not AD. The remote device's only connection to the Windows Server machine is through the APP's TCP ports. Our company has no interaction with Microsoft. We do not have a Microsoft sales team. Past inquiries have determined that it is cheaper for us to buy Microsoft software (and CALs) retail than to enter into any kind of "arrangement" with Microsoft. I have many questions about SQL Server CALs and Windows Server 2008 CALs. How can I obtain authoritative/legally binding answers? I am not looking for FREE legal advice. I AM looking for FREE advice about who/what/where I can responsibly spend my money to get meaningful information. I fear that passing this on to the local company law firm will just mean paying them to educate themselves on Microsoft licensing - and if that's anything like writing code against a new Microsoft API, they are not going to get it right the first time. Going to Microsoft for answers sounds like swimming up to a hungry shark and asking "One leg or two?" I am hoping someone has been down this road before and knows a law firm/lawyer experienced in these matters. Any help/suggestion welcome. Thanks.

  • Multi-monitor setup with a TV as well?

    - by jasondavis
    I have had a dual-monitor setup on my PCs for years now. I am now in the market to build a new PC and get some new gear, and I need some help or advice on how to run 3 monitors WITH a 4th display, which will be an LCD/plasma TV. I am thinking I can get 2 video cards, which will give me all the hookups to run 3 monitors instead of 2. I will mount all 3 monitors in a row, side by side, and above them on the wall I would like to mount a larger 30-40+ inch LCD or plasma TV. I would then like to hook up my satellite/cable to this TV just as I would normally hook up a TV, but I would also like the option to view my PC on this TV. I know that is possible, but would it be possible to view my PC on that TV while still viewing my 3 other monitors, with the TV acting as a 4th display, where I could dock a different app/window in Windows on each of the 4 displays (3 monitors + TV)? Please tell me any tips/advice on how to do this, including what cables/software/converters - you name it. Thanks for any help.

  • Smart card driven membership and door entry system

    - by Rob G
    I'm looking at putting in a smart-card-driven system at my local sports club (which doesn't have oodles of money), and since they're willing to pay for hardware and I'm willing to do the technical setup, I was wondering if anyone has experience setting something like this up. Writing any software needed is not the problem; I've pretty much got that covered with various open source projects out there and custom code I'll write. It's more the hardware side I'm not too sure about, and I'm looking for advice from people out there. I'm sure there are numerous complications, but on the surface it looks fairly simple. I'd basically like to enable members to swipe/touch a smart card at the door to gain entry to the club, then walk up to a touch-screen PC and swipe/touch a card reader there to "login" to the system I create, which will allow them to book club facilities, etc. I may even want that same card to activate things like lights or music when they enter the room they've booked. Pretty utopian, I know, but still, we'd like to get as close as we can. As I said, the software shouldn't be a problem, and on the hardware side, so far I'm looking at: an all-in-one touch-screen PC running Windows 7 or Ubuntu; a USB card reader (not sure which one to buy); smart cards (again, never bought these before); and door/lighting hardware that could be triggered (not sure here either). If anyone has any advice on implementing something like this - especially the items I'm not sure about above, and of course anything crucial I've missed - I'd be most grateful. Recommended hardware that you've used for something like this would be fantastic!
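
    On the reader question: most USB smart card readers speak the PC/SC standard, which the pyscard library exposes in Python; a minimal sketch of the kiosk "login" read (the APDU below is the standard PC/SC get-UID command for contactless cards, and the sketch assumes a card is already on the reader):

        from smartcard.System import readers  # pyscard: pip install pyscard

        GET_UID = [0xFF, 0xCA, 0x00, 0x00, 0x00]  # PC/SC: return the card's UID

        reader_list = readers()
        if not reader_list:
            raise SystemExit("no PC/SC reader found")

        conn = reader_list[0].createConnection()
        conn.connect()  # assumes a card is present on the reader

        data, sw1, sw2 = conn.transmit(GET_UID)
        if (sw1, sw2) == (0x90, 0x00):  # success status words
            uid = "".join(f"{b:02X}" for b in data)
            print("card UID:", uid)    # key for the membership database
        else:
            print(f"reader returned status {sw1:02X}{sw2:02X}")

    Using the card UID as the key in both the door controller's whitelist and the booking system's user table is what makes the one-card experience hang together; buying a reader explicitly advertised as PC/SC-compliant keeps both the Windows and Ubuntu options open.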

  • Wildcard redirect issue giving error "This webpage has a redirect loop"

    - by david
    On my website I changed (or, better word, modified) the directory name "vehicles-cars" to "vehicles-cars-for-sale". When I tried to redirect the old directory name to the new one using a wildcard redirect in my web hosting cPanel account, every time I open pages from that directory I get the error: "This webpage has a redirect loop." The website is PHP. The problem is that lots of pages from the old directory are indexed in Google and are being treated as duplicate content. If I redirect a single page it works perfectly, but there are lots of pages, so I need a wildcard redirect for the whole directory. I really need some advice on what to do about this problem. Here is the .htaccess code for the redirect:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^example\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.example\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/example\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    I have another wildcard redirect of a whole directory with the same code, and it works perfectly. Here is that code from the same .htaccess file:

        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^autos\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]

    So I don't understand what's wrong with the first block. I really need some expert advice. Thanks again.
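
    For what it's worth, the loop is visible in the rules themselves: the pattern ^vehicles\-cars\/?(.*)$ also matches the new directory, because vehicles-cars is a prefix of vehicles-cars-for-sale, so every redirected URL matches the rule again and is redirected forever (the autos rule works precisely because autos is not a prefix of its target). A guard condition that exempts the new directory, sketched below against the same example.com placeholders, breaks the cycle:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$
        # Skip URLs already under the new directory name.
        RewriteCond %{REQUEST_URI} !^/vehicles-cars-for-sale
        RewriteRule ^vehicles-cars/?(.*)$ http://example.com/vehicles-cars-for-sale/$1 [R=301,L]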

  • Is Ubuntu a viable replacement of Windows XP for small enterprise environments?

    - by Alex. S.
    Hi all, I'm a newbie systems administrator, so any advice would be great. I would like to set up Ubuntu 8.04 LTS in a small management-consulting office (around 50 workstations) instead of Windows XP, and install MS Office 2007 via Wine (*). It would be a fresh installation, so the migration would be less of a pain. The new setup would also include a small server as a document repository and a backup server for now. Later, I would add other goodies like an IM server, a document management solution, and whatever collaborative tools make sense. What do you advise in this scenario? Do you think it is viable? Should I try to convince my managers this is a good idea? I consider myself a fairly experienced user of both systems, and I'm the only person in charge of everything. I need to cut costs, and I think antivirus and antimalware software are a waste of money and time. Is this a good idea, or should I give up and instead lock down the Windows systems and install AV software? Is there anything else in this setup I'm not foreseeing? (*) The only catch on my test machine so far has been that Office SmartArt doesn't work properly; the rest of Office 2007 seems OK.

  • EC2 kernel decision and issues with creating a new machine with my AMI

    - by roacha
    I could really use some advice. I started a new instance on EC2 using Amazon's AMI, and during the deployment process I selected a kernel ID of "Use default". I then configured my server the way I wanted, took a snapshot of it, and created my own AMI to build new servers from. When I try to create a new server with this AMI, the server fails to start and I get the error: EXT3-fs: sda1: couldn't mount because of unsupported optional features (240). This appears to happen because I am selecting a kernel ID of "Use default" again when building my second server. I have read that for this to work I need to choose the same kernel ID that my original server used, but I have deleted the original server and don't know what it was using. What is the best process to follow to avoid these issues? Should I choose "Use default" for my original server, and how do you know which kernel it selected? Should I just document the kernel and always specify it when deploying new servers from my custom AMI? Or should I choose a specific kernel ID during the initial build and always use that one going forward, hoping Amazon never retires it? Thanks for any advice!
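
    On the "how do you know which kernel it selected" question: a running instance can ask the EC2 instance metadata service directly (169.254.169.254 is AWS's standard link-local metadata endpoint; this sketch has to run on the instance itself):

        import urllib.request

        # EC2 instance metadata service; only reachable from inside the instance.
        META = "http://169.254.169.254/latest/meta-data/"

        for item in ("kernel-id", "ami-id", "instance-type"):
            try:
                with urllib.request.urlopen(META + item, timeout=2) as resp:
                    print(item, "=", resp.read().decode())
            except Exception as exc:  # kernel-id is absent on some instance types
                print(item, "unavailable:", exc)

    Recording the aki- identifier this prints, and pinning it explicitly when registering or launching the custom AMI instead of "Use default", avoids the mismatched-kernel mount error.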

  • Assembly-level language? Unlock iPhone 3GS with latest baseband. Need opinion

    - by getkenny
    Hi guys, it's advice I need more than anything. I have two iPhone 3GS units (bootloader 06.02 and baseband 05.11) which are lying around useless because they were bought in the US and I am now in Dubai; I cannot use them because there is no unlock. Rather than waiting and relying on other people to produce an unlock for this baseband, I was thinking of learning what it takes to unlock an iPhone myself, though I currently don't even know what I need to learn to do this. I understand from some reading around that I will need to learn ARM assembly to understand the baseband and try to find an exploit: is that correct? I really want to help people get their iPhones working. Also, the iPhones cost $645 each (16 GB), so it's not as if Apple is losing any money on them; the person who bought them for me assumed that buying without an AT&T contract meant they were unlocked, which is not true. I need help; I am willing to learn, and you guys are the best bunch around to give me advice. Regards.

  • W2003StdR2 server: DNS dysfunctional!

    - by Tor
    I hate to have to do this, but I feel up that creek with no... well, some of you might know. At the moment, my one and only DNS server refuses to do forwarding. The story goes: this site had 2 servers, a W2003 SBS and a W2003 Std R2. The SBS degraded over a short period of time, and so as not to go down with it, I decided to move all data over to the other server. This was of course an AD-integrated site. The move went OK, the Std server was removed from the domain, and the SBS was put to rest. For the time being we decided to run the Std box as a standalone server with no AD. We renamed the internal domain to xxx.local, set the server up with DNS and DHCP, and installed WINS (not activated). DNS forwarding points to our ISP through a Netgear firewall, with the same address setup used as before. So: the DNS server started and all went OK, clients were reconfigured and hooked up, and then, after about a day, internet name resolution stopped working on the server! Nothing had changed, been altered, or been modified - nothing! What I now get when doing NSLOOKUP is just a 2-second timeout response. I have checked and looked, but to no avail. Has anybody seen this behaviour before? And yes, ALL service packs are up to date on the server. I would be much obliged if anyone in here could lend an ear... and give advice! Thanks... from Tor in Norway. Update: today is the 14th, and I still have no resolution to this nagging problem. Does anybody else have any advice in the matter? Please?
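
    One way to narrow down where the chain breaks, sketched here with the dnspython package (both IPs are placeholders: the local W2003 server first, then the ISP forwarder it hands off to), is to query each hop directly:

        import dns.resolver  # dnspython: pip install dnspython

        SERVERS = {
            "local W2003 DNS": "192.168.1.10",  # placeholder internal address
            "ISP forwarder":   "203.0.113.53",  # placeholder; use the real forwarder IP
        }

        for label, ip in SERVERS.items():
            res = dns.resolver.Resolver(configure=False)
            res.nameservers = [ip]
            res.lifetime = 3  # seconds before we call it a timeout
            try:
                answer = res.resolve("www.example.com", "A")
                print(f"{label}: OK -> {answer[0]}")
            except Exception as exc:
                print(f"{label}: FAILED ({type(exc).__name__})")

    If the forwarder answers but the local server times out, the fault is in the server's forwarding configuration or in the firewall suddenly dropping its outbound port-53 traffic; if the forwarder itself fails, the Netgear or the ISP is the suspect.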

  • In Windows 7, why won't my display stay off despite the power settings saying it should?

    - by Jer
    I'm completely stumped by this. My simple use case is that when I'm in bed, I use a cordless mouse to browse the web, watch videos, etc. - the monitor is across the room. When I'm going to sleep, I want to shut the monitor off, and I want to be able to turn it back on in the morning. I just want to turn the monitor off and on using only the mouse. I thought of creating a power setting that turns the monitor off as soon as possible (the shortest interval is one minute; that's fine), and I have one that does this. It worked great for almost a year on my old XP machine, and for about four months on my new Windows 7 laptop (which I essentially use as a desktop). All of a sudden, a couple of weeks ago, it just stopped working: my monitor won't turn off on its own anymore. The power plan is still set to turn off the display after 1 minute. I tried other options. Based on the advice here, I tried nircmd. This seemed great: I created a shortcut with the command line

        "C:\Program Files\nircmd\nircmd.exe" cmdwait 1000 monitor off

    I click this, and in one second the monitor goes off. However, about five seconds later it turns back on, and I've been extra careful to make sure the mouse isn't moving. I have no idea what's going on. Based on both of these things, my only guess is that something running in the background somehow makes the computer think it's in use. I've tried killing as many programs as possible, but I still get the same behavior. Any advice? I'm mainly curious about how to debug this, but am open to other suggestions for turning the monitor off and on with just the mouse as well.
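
    Two debugging angles may help here. First, powercfg /requests (built into Windows 7; run from an elevated prompt) lists the processes currently blocking display power-down. Second, the nircmd behaviour can be reproduced without nircmd through the standard Win32 broadcast message that powers the panel off, which at least separates "the command doesn't work" from "something wakes the display up"; a sketch:

        import ctypes
        import time

        HWND_BROADCAST  = 0xFFFF
        WM_SYSCOMMAND   = 0x0112
        SC_MONITORPOWER = 0xF170
        MONITOR_OFF     = 2  # 2 = off, 1 = standby, -1 = on

        time.sleep(1)  # a moment to take your hand off the mouse

        # Broadcast the standard "power down the display" system command.
        ctypes.windll.user32.SendMessageW(
            HWND_BROADCAST, WM_SYSCOMMAND, SC_MONITORPOWER, MONITOR_OFF)

    If the display still wakes within seconds using this route too, some device or process is generating activity; powercfg /requests, or detaching suspect USB devices one at a time, usually flushes out the culprit.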

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an offsite backup, which is filled with 1.7 TB of data. I'm currently paying as much for the backup as I am for the server, and I was looking for advice from experienced people on the best option from here. Would it be a viable solution to ditch the offsite backup and instead purchase an additional server that is an exact duplicate of the first, so that if the first server goes down, users are re-routed to the second without noticing the first is even down? This would give me an automatic backup of the first server (albeit not offsite) and remove the need for the expensive offsite backup. Is that a true replacement for the pricey backup, or is offsite backup absolutely necessary? And how would I go about doing this? (Obviously it's pretty complex, so links to some reading material, or just the terminology of the procedure, would be great.) I appreciate the help and advice.
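
    The re-routing piece is usually DNS failover or a floating IP driven by a health check; as a sketch of the moving part (the endpoint and thresholds are invented), the monitor is just a polling loop like this, with the actual traffic switch delegated to the DNS provider's or host's API:

        import time
        import urllib.request

        PRIMARY = "http://203.0.113.10/health"  # placeholder health endpoint
        FAILURES_BEFORE_FAILOVER = 3

        def is_up(url):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.status == 200
            except Exception:
                return False

        failures = 0
        while True:
            failures = 0 if is_up(PRIMARY) else failures + 1
            if failures >= FAILURES_BEFORE_FAILOVER:
                print("primary down - switch DNS / floating IP to the standby here")
                break
            time.sleep(30)  # poll every 30 seconds

    One caution on the substitution question: a live mirror protects uptime, but it faithfully replicates deletions, corruption, and a successful attack, so it does not replace backups; a cheaper middle ground might be the mirror plus a slimmer offsite snapshot of only the irreplaceable data.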

  • Windows Server 2003 Explorer shows duplicate local shares

    - by user52167
    Hi folks, I am new here and I could really use some advice, please. I am having a problem with our file server. When I try to browse the shared folders using Explorer, several of the shared folders all appear to have the same name, and whenever I attempt to rename one of the affected folders, all the affected folders' names change too. Our file server is Windows Server 2003 R2, and I am logged on directly to the server using Remote Desktop. When I open any of the folders, all is as it should be: the proper content is there and the address bar displays the correct folder name and path. The share names are correct, so everything that needs to access the shared folders/files can do so. Also, when I browse to the folders from the command line, all is as it should be there too. The only issue seems to be the incorrect display name when browsing with Explorer. Can anyone offer any advice or help on how to resolve this issue, please? It would be most appreciated. Thanks

  • How do I view source in Outlook 2010?

    - by Martin Duys
    This question has already been asked here, but the answer only explains how to view the email header, not the actual HTML source of the email. There is another question here which I think may be caused by the same issue as mine, but it does not have a satisfactory answer (the answer does not work for the person who posted the question). If I right-click on the body of an email I can see the option to 'View Source', but when I select it nothing happens. I have done a bit of research and came across a post for a much earlier version of Outlook that suggested adding something to the registry. I applied this advice, but it made no difference; however, I'm pretty sure that applying this kind of solution correctly for my circumstances will do the trick. When I first received this machine it had a demo version of UltraEdit installed. I uninstalled UltraEdit and installed Notepad++ instead. I am convinced that there is a registry entry pointing to UltraEdit as the default viewer for 'View Source' in my email, and that I need to replace this entry with a reference to Notepad or Notepad++, but I don't know how to do this. Any suggestions?

  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would consider obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each choice. These are some of the questions that come up:

    - where to put sources
    - where to put built object files and libraries
    - where to install
    - what to set in environment variables
    - what compiler flags matter, and how to manage them across several types of builds
    - what configuration entries to make in an IDE
    - how to manage libraries to support multiple environments
    - how to handle different build versions, such as debug vs. release, or cross-platform builds

    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about them that result in as little manual configuration as possible, given some existing sources, a particular IDE or no IDE at all, a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more:

    - debug builds without an IDE
    - debug builds in NetBeans
    - debug builds in Eclipse
    - debug builds in Visual Studio
    - all of the above, with release builds for Linux, Mac and Windows

    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?

  • Minecraft server hosting hardware specifications [on hold]

    - by Andrew Wright
    I am planning on purchasing a server to rent out Minecraft game servers, largely to friends. I am planning on purchasing a 128 GB RAM server to save on colocation costs (as I am likely to need more than 32 GB, and would otherwise have to rent 2U of space...). I am hoping for some advice about the processing power needed to drive this much RAM. The game servers will run in a shared environment on Linux, in a VM, to make backups easier. The server I have in mind is dual-CPU. I have been considering, at the low end, dual Xeon E5-2609 v2 quad-core 2.5 GHz, and at the high end, dual Xeon E5-2650 v2 eight-core 2.6 GHz. The difference between these is 6.4 GT/s versus 8 GT/s, and £3000 for the lower-spec server versus £4300 for the higher-spec one. I was hoping for advice on whether it is worth paying for the extra cores and higher speed, or whether I would be wasting my money. Thank you for any help - I appreciate that this is not directly related to professional system administration.

  • I really need help resolving a Windows Vista BSOD (Blue Screen Crash) on my desktop

    - by anonymous
    Hi, thanks for taking the time to read this. I'll get straight to the details. My desktop is on the fritz; it keeps blue-screening with the stop code 0x0000007E immediately after Vista's loading bar, right before transitioning to the account selection screen. My desktop runs a dual-core 32-bit processor with Windows Vista Home (?) installed. I have 3 GB of RAM as two separate modules, a 1 GB Acer module and a 2 GB Geil module. I have an ATI video card; unfortunately I cannot recall the exact name, but the chipset is ATI and the manufacturer is Sapphire, and the card is on the lower end. My hard drive is 320 GB (I think), partitioned in two: the C:\ partition is redlined, while the D:\ partition is still pretty empty. As per the advice of my friend, I tried restarting the system with the graphics card removed. Upon failure, I repeated the process removing one RAM module at a time, but the system still failed to load. Vista would attempt to repair the system; it would initially report that the system was fixed, but it really failed to fix the problem, and after I removed the memory modules, Vista started to report its inability to fix the problem. I tried running in safe mode, and the driver listing would always stop at crcdisk.sys. I ran memory diagnostics using the Windows Memory Diagnostic tool found in the screen after Vista's failed repair attempt, with no luck. The problem details are as follows:

        Problem Event Name:   StartupRepairV2
        Problem Signature 01: AutoFailover
        Problem Signature 02: (Vista's version number?)
        Problem Signature 03: 6
        Problem Signature 04: 720907
        Problem Signature 05: 0x7e
        Problem Signature 06: 0x7e
        Problem Signature 07: 0
        Problem Signature 08: 2
        Problem Signature 09: WrpRepair
        Problem Signature 10: 0
        OS Version:           6.0.6000.2.0.0.256.1
        Locale ID:            1033

    Any correct advice would be appreciated, as I really need my PC to work so I can work on my projects. Kind of sad, but I'm in a college of computer science and I have no idea what to do :P
