Search Results

Search found 24350 results on 974 pages for 'bug a lot'.

Page 509/974 | < Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >

  • new vhost - main host AWstats

    - by vn
    Hi, I just began working at this new job and I have to configure a new host for stats with AWStats. I once used AWStats on my own server, no biggie. Now I'm on a multi-site server with the access_log files nicely split per site. I copied an awstats.conf file from one of the sites that already has (working) stats, changed the LogFile and SiteDomain values as described at http://awstats.sourceforge.net/docs/awstats_setup.html#BUILD_UPDATE, saved the conf, and ran the commands perl awstats.pl -config=mysite -update and perl awstats.pl -config=mysite -output -staticlinks awstats.mysite.html (yes, I replaced these with my own values...). The problem is: whenever I try to access the HTML file or the dynamic page (with the config option on awstats.pl, like my working site does), I get the stats of the MAIN site, from access.log itself (and not access_log-mysite), judging from what it says at the top of the page and from the hostname in the left tab (stats for mysite.com)... What did I do wrong? There are no errors from what I can see... Thanks a lot for any help.
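
    A hedged note on what often causes exactly this symptom: if AWStats cannot find a per-site config file whose name matches the -config value (awstats.mysite.conf), it quietly falls back to the generic awstats.conf, which points at the main access.log. A minimal per-site config and rebuild might look like the sketch below; every path and the name "mysite" are placeholders, not taken from the question.

        # /etc/awstats/awstats.mysite.conf  - hypothetical path; the filename must match -config=mysite
        LogFile="/var/log/httpd/access_log-mysite"    # the split per-site log, not the main access.log
        SiteDomain="mysite.com"
        HostAliases="www.mysite.com localhost 127.0.0.1"
        DirData="/var/lib/awstats/mysite"             # keep each site's stats data in its own directory

        # rebuild the stats and regenerate the static report
        perl awstats.pl -config=mysite -update
        perl awstats.pl -config=mysite -output -staticlinks > awstats.mysite.html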

    Read the article

  • What expectation should I have of South African web development rates / duration? [closed]

    - by Warren van Rooyen
    I am a developer but only intend to do the front-end work for getting Reddit-like upvote/downvote functionality going on an upcoming site I'm building. I have never had to contract a developer for back-end work, so I am quite in the dark on how much I should expect to pay and how long it could take to get the site going. I could be taken for a ride: a developer could inflate the time it would take at a seemingly normal rate (hourly/daily), or could otherwise inflate the rate itself. Could you please give me some guidance on this? I know you need some context on the nature of the site, so here it is. I have a Reddit-type template with CSS and PHP included, and I downloaded the Pligg code that's intended to provide the Reddit-style upvote/downvote functionality. How long would a developer roughly need to unite the theme and front end with the back-end functionality? I understand it's not a lot of information, but I'm sure you're experienced enough to have an instinct for the size of the project. Also, should I work on an hourly rate, a day rate, or a fixed-price project agreement?

    Read the article

  • how to install 13.04 on a partitioned hard drive

    - by Denny
    First, I am not a computer-literate person, not even a novice, so please use small words. I recently made the switch to Ubuntu; it came preloaded on the new laptop that I ordered from a big tech dot-com site. The version on it is 12.04 (I think), 64-bit. This system has a lot that I like, but it is quirky for me, to say the least. Apparently I have held broken packages and have no way of knowing how to find them. I discovered this when trying to install VLC (from the Software Center) so that I could watch some movies I had on an external hard drive. Unmet dependencies errors and held broken package errors abound while trying to fix the problem. I've scoured this site and others and followed almost all the suggestions to a T, but still I am unable to fix anything. My computer is partitioned (but I don't even know how to get to the other side, so to speak). I would like to know: can I put the newer 13.04 OS on one side of the partition and then delete the older version on the other side? Or can I install 13.04 over the existing 12.04? What would I need to do this? An obstacle I have is this: I am currently serving in Afghanistan, so going someplace to buy something or running down to a computer store for service support is out of the question. I very much appreciate your help, because right now this computer is nothing more than a word processor, which would be fine if all I wanted was a word processor. Thanks in advance.
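
    For reference, a hedged sketch of the usual first-aid commands for held broken packages, plus the supported in-place upgrade path. None of this is tailored to this particular machine, and 12.04 normally upgrades to 13.04 via 12.10 rather than in a single jump.

        # try to repair held/broken packages first
        sudo apt-get update
        sudo apt-get -f install        # attempt to satisfy unmet dependencies
        sudo dpkg --configure -a       # finish any half-configured packages
        sudo apt-get install vlc       # then retry the original install

        # upgrading in place, without touching the other partition
        sudo do-release-upgrade        # run once per release step (12.04 -> 12.10 -> 13.04)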

    Read the article

  • How to get initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as I am making my first attempts at TDD. I loved the sense of confidence it brings and the generally better structure of my code, but when I started to apply it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know in general how it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to turn this private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and need to split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on the method that had its signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.

    Read the article

  • Puppet performance compared to cfengine

    - by Andy
    I'm considering using Puppet or CFEngine. Key factors are performance, and research on the internet suggests CFEngine uses less memory and fewer CPU cycles than Puppet. However, Puppet seems easier to use. I need to manage several web servers, as well as handheld tablets and machines that will only connect to some central control servers periodically. All are Linux machines. Would I be able to use either Puppet or CFEngine for this? And if so, does Puppet still make poor use of resources? I'd like to use Puppet because it seems simpler, but a lot of the articles I've found refer to CFEngine 2 - is CFEngine 3 easier to configure? Thanks

    Read the article

  • Too many connections to 212.192.255.240

    - by Castor
    Recently, my Internet slowed down drastically. I downloaded a tool to see the TCP/IP connections from my Vista computer and found out that a lot of TCP/IP connections are being made to 212.192.255.240 through svchost. It seems to be trying to connect to different ports. I think my computer is infected with some kind of malware, but I am not sure how to get rid of it. I did a little bit of research on this IP but found nothing. Any suggestions are highly appreciated.
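
    A hedged diagnostic sketch for pinning down which service owns those connections before assuming malware; the PID 1234 below is just an example standing in for whatever netstat reports.

        :: list connections to that address together with the owning process IDs
        netstat -ano | findstr 212.192.255.240

        :: map a reported PID to the services hosted inside that svchost instance
        tasklist /svc /fi "PID eq 1234"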

    Read the article

  • 50 Years After The Jetsons

    - by Jason Fitzpatrick
    The Jetsons, the future-oriented animated cartoon series from the 1960s, turned 50 this week. The Smithsonian takes a look at what the show meant, then and now. At the Smithsonian blog Paleofuture, Matt Novak looks back at the last 50 years and the impact that The Jetsons had. He writes: It's important to remember that today's political, social and business leaders were pretty much watching "The Jetsons" on repeat during their most impressionable years. People are often shocked to learn that "The Jetsons" lasted just one season during its original run in 1962-63 and wasn't revived until 1985. Essentially every kid in America (and many internationally) saw the series on constant repeat during Saturday morning cartoons throughout the 1960s, '70s and '80s. Everyone (including my own mom) seems to ask me, "How could it have been around for only 24 episodes? Did I really just watch those same episodes over and over again?" Yes, yes you did. But it's just a cartoon, right? So what if today's political and social elite saw "The Jetsons" a lot? Thanks in large part to the Jetsons, there's a sense of betrayal that is pervasive in American culture today about the future that never arrived. We're all familiar with the rallying cries of the angry retrofuturist: Where's my jetpack!?! Where's my flying car!?! Where's my robot maid?!? "The Jetsons" and everything they represented were seen by so many not as a possible future, but a promise of one. Hit up the link below for the full article - prepare to be surprised at just how few episodes of the show were ever animated and aired.

    Read the article

  • How to automatically mount a LUKS partition only when the disk is plugged in

    - by Frederick Roth
    I have the following scenario: I want to automatically back up some data from my laptop (Fedora Core 17) to an external encrypted (LUKS) hard disk. The disk can be opened with a key file, which lives on the (also encrypted) root partition of my laptop. The hard disk is attached to my docking station and is therefore only "present" when I am at home (roughly half of the time the laptop runs). I have everything set up the way I want it, with one exception: I can't find a decent way to mount the hard disk automatically at boot if and only if it is present. If I add it to crypttab and fstab without noauto, the system tries to mount it at boot and produces a lot(!) of delay and error messages when it is not present. If I add noauto, well, it does not mount automatically ;) Is there a way to configure LUKS/crypttab to do the following: check whether the disk is present; if yes, decrypt and mount it; if no, just don't?
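
    One hedged way to get "mount only if present" is to keep noauto in crypttab/fstab and run a small script at boot (or from a udev/systemd hook) that checks for the disk first; the UUID, key file path and mount point below are placeholders. On systemd-based systems the nofail option in crypttab/fstab is reported to achieve something similar.

        #!/bin/sh
        # open and mount the backup disk only if it is actually attached
        DISK=/dev/disk/by-uuid/REPLACE-WITH-REAL-UUID
        if [ -b "$DISK" ]; then
            cryptsetup luksOpen "$DISK" backup --key-file /root/backup.key
            mount /dev/mapper/backup /mnt/backup
        fi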

    Read the article

  • unable to access shared drives in win7

    - by colin
    OK, this is doing my head in. Not your usual Win 7 sharing problem. I just built a Win 7 computer to go onto my home LAN and be accessed by everyone on the network, which is mostly running XP SP3. I installed it as a single HD+DVD system, got it happy, then added my storage drives, set them up with the right order and letters, rebooted, and shared them. I couldn't access the computer at all from any XP machine. I set the password-protected sharing option in Win 7 to off, and now I can "see" all four shared drives from any machine, but can access only two of them. Please tell me what's going on, as I've done nothing different from one drive to the other - just installed them, set up the drive letters as normal, then shared the damn things. The odd thing about it? The two 1.5 TB drives that have a lot of data on board are accessible; the two nearly empty 500 GB ones are not. ANY ideas? Cheers, Colin
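
    A hedged checking sketch, since "visible but not accessible" usually means the share-level or NTFS permissions differ between the drives; the drive letters and share name below are examples, the idea being to compare a working drive against a failing one.

        :: list all shares, then show the share-level permissions of one of them
        net share
        net share E

        :: compare the NTFS permissions on the root of a working vs. a failing drive
        icacls D:\
        icacls E:\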

    Read the article

  • How different is WPF from ASP.NET [closed]

    - by Tom
    I have been quickly moved over to a different project at work because the project needs more help. I was chosen because they are confident in my abilities and thought I would be the best fit to help finish the application out over the next couple of weeks. I am a little nervous, though I do tend to pick things up quickly. I was moved to a different project at the beginning of this year and now I know it like the back of my hand; previously I was on another project. Both of those projects were ASP.NET web applications, which I believe are considered Web Forms applications? The project I am moving to is a desktop WPF application. I have read that many people enjoy developing their applications with WPF; I just have never dealt with or worked with WPF before. I like to consider myself pretty good at ASP.NET/C# and I do a solid job. We deal with a lot of data processing from the database and report generation, so I do get to exercise C# more than in some web applications where the C# end of it is mostly just event-driven code and simple instructions. How different are the two? Will it be completely foreign to me? Or is it just a different way of looking at a problem, and I can familiarize myself quickly? Thanks for the input.

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway, because I know I can get some good answers about it. Context: our team is developing and supporting a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix things, and the knowledge for these actions is passed on more by 'oral transmission' than through modern tools. We need an efficient way to store them in a knowledge base so we can retrieve similar cases based on patterns (a stack trace, an error message, a component name, a workflow step, ...) ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to 'waste' time writing a report that will never be found... Do you know an efficient knowledge base tool for this kind of use case?

    Read the article

  • Forgot to unmount/eject external hard drive, lost moved files. Mac OS X

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop while holding down the Command key to move the files rather than to copy them). They moved to the drive all right, but as I was having some issues and the Finder crashed after the transfer, I was unable to eject the volume and later everything froze so I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hard drive back in) it no longer had any of the files which I moved onto it. As it was a lot of data, how can I recover these files?

    Read the article

  • Small change in MVVM Light Toolkit templates for Blend 4 RC

    - by Laurent Bugnion
    Ah, the joy of new releases… You will find that the MVVM Light Toolkit works fine with Visual Studio 2010 RTM and Blend 4 RC, except for a few adjustments. Blend templates: the path to the Expression Blend 4 project templates changed. If you start Expression Blend 4 RC now, you will likely not see the MVVM Light templates in the New Project dialog (New Project dialog with MVVM Light). To restore the templates, follow these steps: 1. Open Windows Explorer. 2. Navigate to C:\Users\[username]\Documents\Expression (or simply type My Documents in Windows Explorer and then open the Expression folder). 3. Change the name of the "Blend 4 beta" folder to "Blend 4". That's it, you should now see the templates in the New Project dialog in Blend 4. Note that since the new name is "Blend 4", I hope that I won't need to do the same exercise when Blend 4 RTM is released! Windows Phone 7 templates: since the Windows Phone 7 tools are not ready yet for Visual Studio 2010 RTM and Blend 4 RC, the templates in the Silverlight for Windows Phone folders will not work. You will get an error if you try to create such a project in the newly released environment. I hesitated to remove these templates from the current packages, but honestly that is a lot of trouble for a very short time before the tools for Windows Phone 7 are released (note: I don't have any information as to when these tools will be released). In the meantime, just don't create a WinPhone7 application. Reminder: if you want to write code for Windows Phone 7, you need to keep Visual Studio 2010 RC as well as Expression Blend 4 beta. Updated package: I uploaded an update to the Blend 4 templates. It is available, like before, on the "Install manually" page and on the Codeplex page. Laurent Bugnion (GalaSoft)
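
    For what it's worth, the "Blend 4 beta" folder rename can also be done from a command prompt in one line; a sketch that assumes the default Documents location.

        ren "%USERPROFILE%\Documents\Expression\Blend 4 beta" "Blend 4"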

    Read the article

  • Vista won't boot - just get black screen

    - by DisgruntledGoat
    When I boot into Windows Vista (Ultimate), I just get a black screen (with the mouse visible and working). If I run in safe mode, it seems to pause for a while when loading crcdisk.sys. A lot of research says it could be a problem with the hard drive, but I dual boot Ubuntu and that works fine and I can still see and use the Windows partition absolutely fine from Ubuntu. I tried using the "startup repair" option on the Vista install disk but it didn't detect any problems. I have run chkdsk several times, with this: chkdsk C: /f /r And also drive D (the recycle bin). The first two times it detected and fixed errors on the C drive but now it doesn't detect any errors. Is there anything else that could be causing this problem?
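
    Beyond chkdsk and Startup Repair, the other stock repairs available from the install disk's recovery command prompt may be worth a try; a hedged sketch only, and drive letters can differ inside the recovery environment.

        :: rebuild the boot records and the BCD store
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

        :: check protected system files on the offline installation
        sfc /scannow /offbootdir=C:\ /offwindir=C:\Windows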

    Read the article

  • Not enough storage is available to process this command

    - by Mohit
    I am getting this error on almost every operation on a Windows 7 Pro 32-bit machine - and by operations I mean anything I do: updating a repo from Subversion, accessing a local IIS site, copying a big folder, running an installer. Sometimes if I try again it sorts itself out. I think there is something wrong with Windows 7. I searched around and found posts suggesting increasing the IRPStackSize value in the registry; I did that, no luck. I am using Microsoft Security Essentials version 1.0.1961.0 as my antivirus package. Once this error starts popping up, I have to restart, and then after some random amount of time it starts showing up again. Any help is appreciated. I am losing a lot of my time restarting my system or retrying again and again.
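
    For reference, the registry change those posts usually describe looks roughly like this; the value 32 is only an example, the parameter lives under the LanmanServer service key, and a reboot is needed afterwards.

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
            /v IRPStackSize /t REG_DWORD /d 32 /f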

    Read the article

  • Lightest Linux Desktop supporting Firefox/graphic browser

    - by Susan Mayer
    I am on Windows and I have a remote server with Ubuntu 10.10. I want to use Firefox or another graphical browser on that remote server. The problem is that the server's memory is only 512 MB, so I can't install a larger desktop environment. I used to use XFCE and NoMachine NX, but they consume too much memory on that Ubuntu server. The only thing I want to use is a graphical browser (for example Firefox) on that server. Nothing else. Do you have any good suggestions? Thanks a lot!
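
    Two hedged alternatives that avoid installing a full desktop on the server; the package names are the usual Ubuntu ones and may differ slightly on 10.10, and option 1 needs an X server on the Windows side (for example Xming).

        # option 1: run only Firefox on the server, display it locally over SSH X forwarding
        ssh -X user@remote-server firefox

        # option 2: a minimal graphical stack instead of a full desktop environment
        sudo apt-get install xorg openbox firefox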

    Read the article

  • CompTIA A+ exam

    - by SysPrep2010
    Hello everyone, I have been in the IT field for only two years. I have been dealing with servers, firewalls, routers, switches, backup servers, and desktops. For the desktops, I have been dealing with WDS (Windows Deployment Services) - not a lot of hardware. My question is this: is it really important to have an A+ cert under your belt? I don't see the point anymore. When a desktop goes down, from what I have been seeing, they just buy a new one. I mean, I can rebuild systems, they are fun, but I haven't in a while.

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically, 4 TB of critical data built up over the last 12 years of running your business – and if your quick hotfix happened to accidentally delete that 4 TB of data, then you're "looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

      - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
      - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful." We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are much less far down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline
      - Work directly on live databases
      - Sometimes work directly in production
      - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
      - Any tests that we might have are run manually

    Level D2 – Beginner
      - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
      - An attempt is made to keep production in sync with development environments
      - There is some documentation and planning of manual deployments
      - Some basic automated DB testing is in place

    Level D3 – Intermediate
      - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
      - Database environments are managed
      - The production environment schema is reproducible from the source control system
      - There are some automated tests
      - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
      - Using continuous integration for database changes
      - Build, testing and deployment of DB changes carried out through a proper database release process
      - Fully automated tests
      - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.
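
    As a concrete illustration of the most basic step above (Level D2's "manually adding upgrade scripts to a version control system"), a minimal hedged sketch; the script names, server and database here are placeholders rather than anything from the article.

        # keep numbered upgrade scripts in source control alongside the application code
        git add migrations/0001_create_orders_table.sql migrations/0002_add_status_column.sql
        git commit -m "Schema changes for the order status feature"

        # apply them to an environment in order (SQL Server example)
        sqlcmd -S myserver -d MyDatabase -i migrations/0001_create_orders_table.sql
        sqlcmd -S myserver -d MyDatabase -i migrations/0002_add_status_column.sql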

    Read the article

  • Micro sound breaks/interrupts on Windows 7

    - by cand
    Hello all, I've been experiencing strange behavior recently. When listening to MP3s, watching a movie, or doing anything else that uses sound, I get micro breaks in the sound. It's like it hangs or cuts out a fragment for about 0.5 s. When I start the OS it's OK, but as time passes it gets worse, to the extent that music is unlistenable, being interrupted every 2 seconds. I haven't found any correlation between this behavior and hardware usage; I don't think it's directly related to the HDD (or it might be, but with a significant delay). I have updated the sound card drivers and it didn't help much. My system is Windows 7, the computer is a simple HP laptop, nx7300-Ru374ES, with a WD Caviar Scorpio Blue hard drive inside and an integrated sound card (I can check the model later if it's important). Did anybody encounter such a problem? Maybe it's a common thing on Windows 7, or someone knows how to solve it? Thanks in advance.

    Read the article

  • SD card mounted as bootable image instead of FAT32

    - by Benny Wong
    I have an SD card that I recently used to take photos on vacation. I took a lot of photos with it and it worked fine in the camera. However, I had forgotten that a few months before this trip I had tried to make the SD card into a bootable Xubuntu USB drive. So when I plugged the SD card in to copy the photos, the card mounted as the Xubuntu image rather than as the FAT32 volume with the images on it. The files must still be on the card. Any ideas on how to fix this? (I'm using Mac OS X.) Thanks!
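
    A hedged first diagnostic on the Mac, before reformatting anything: list every partition the OS sees on the card and try mounting the FAT32 slice directly. The identifiers disk2 and disk2s1 below are placeholders for whatever diskutil actually reports.

        # show every disk and partition the Mac detects
        diskutil list

        # try mounting the card's FAT32 slice explicitly
        diskutil mount /dev/disk2s1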

    Read the article

  • What's the relationship between meta-circular interpreters, virtual machines and increased performance?

    - by Gomi
    I've read about meta-circular interpreters on the web (including in SICP) and I've looked into the code of some implementations (such as PyPy and Narcissus). I've read quite a bit about two languages which made great use of metacircular evaluation, Lisp and Smalltalk. As far as I understand, Lisp had the first self-hosting compiler and Smalltalk had the first "true" JIT implementation. One thing I've not fully understood is how those interpreters/compilers can achieve such good performance or, in other words, why is PyPy faster than CPython? Is it because of reflection? My Smalltalk research also led me to believe that there's a relationship between JIT, virtual machines and reflection. Virtual machines such as the JVM and CLR allow a great deal of type introspection, and I believe they make great use of it in just-in-time (and AOT, I suppose?) compilation. But as far as I know, virtual machines are kind of like CPUs, in that they have a basic instruction set. Are virtual machines efficient because they include type and reference information, which would allow language-agnostic reflection? I ask this because many interpreted and compiled languages are now using bytecode as a target (LLVM, Parrot, YARV, CPython), and traditional VMs like the JVM and CLR have gained incredible boosts in performance. I've been told that it's about JIT, but as far as I know JIT is nothing new, since Smalltalk and Sun's own Self were doing it before Java. I don't remember VMs performing particularly well in the past; there weren't many non-academic ones outside of the JVM and .NET, and their performance was definitely not as good as it is now (I wish I could source this claim, but I speak from personal experience). Then, all of a sudden, in the late 2000s something changed, and a lot of VMs started to pop up even for established languages, with very good performance. Was something discovered about JIT implementation that allowed pretty much every modern VM to skyrocket in performance? A paper or a book, maybe?

    Read the article

  • Is there any way to configure a Google account for Windows 8 Sync?

    - by William.Ebe
    I recently upgraded to Windows 8. The problem is that a lot of the sync features require a Microsoft account to be linked. Even though I have one, I don't want to give one over just for sync purposes. Additionally, Google Chrome's sync feature is absolutely charming, so I prefer Google for storing my data. So: 1) Is there any way to replace Microsoft sync with other accounts, by patching or modifying any byte? 2) I want to replace Maps, Weather, and all possible apps with Google equivalents. I've seen that Mail can be done. What are all the other Metro apps for which this is possible?

    Read the article

  • Move files from multiple folders all into parent directory with command prompt [win7]

    - by Nick
    I have multiple .rar files in multiple folders like this:
      C:\Docs\Folder1\rarfile1-1.rar
      C:\Docs\Folder1\rarfile1-2.rar
      C:\Docs\Folder1\rarfile1-3.rar
      C:\Docs\Folder2\rarfile2-1.rar
      C:\Docs\Folder2\rarfile2-2.rar
      C:\Docs\Folder2\rarfile2-3.rar
      C:\Docs\Folder3\rarfile3-1.rar
      C:\Docs\Folder3\rarfile3-2.rar
      C:\Docs\Folder3\rarfile3-3.rar
    I want to move all of the .rar files to the parent directory 'C:\Docs'. I have a lot more than 3 folders, so I was thinking of making a batch file or something. What would be the commands to do this? Thanks
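
    A minimal sketch of such a batch file; the paths come from the question, %%F is an arbitrary variable name, and when running the command interactively from a prompt a single % is used instead of %%.

        @echo off
        rem move every .rar found under C:\Docs (recursively) up into C:\Docs itself
        for /r "C:\Docs" %%F in (*.rar) do move /y "%%F" "C:\Docs"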

    Read the article

  • Error while compiling/installing PHP with FPM for RPM on Centos 5.4 x64

    - by Raymond
    Hi, I'm trying to make an RPM with PHP 5.3.1 and PHP-FPM 0.6 for CentOS 5.4. So far it goes quite well, but when rpmbuild gets to the installation phase it fails with the following error:
      Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.63379
      + umask 022
      + cd /usr/src/redhat/BUILD
      + cd /usr/src/redhat/BUILD/php-5.3.1/fpm-build/
      + make install
      Installing PHP SAPI module: fpm
      Installing PHP CLI binary: /usr/bin/
      cp: cannot create regular file `/usr/bin/#INST@12668#': Permission denied
      make: *** [install-cli] Error 1
      error: Bad exit status from /var/tmp/rpm-tmp.63379 (%install)
      RPM build errors:
      Bad exit status from /var/tmp/rpm-tmp.63379 (%install)
    I am running rpmbuild as a normal user, so it's understandable that it fails to install anything into /usr/bin, but it shouldn't try to install anything outside the buildroot in the first place. I have, however, specified the BuildRoot in the header of the spec file, and I can see it is passed correctly to the make install command. Does anyone have some idea of what is going wrong here? Thanks a lot!
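
    One hedged detail worth checking, since the spec file itself isn't shown: PHP's build system honors INSTALL_ROOT rather than the more common DESTDIR, so the %install section usually has to pass the buildroot explicitly, roughly like this.

        %install
        rm -rf %{buildroot}
        # PHP's make install ignores DESTDIR; INSTALL_ROOT is what redirects it into the buildroot
        make install INSTALL_ROOT=%{buildroot}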

    Read the article

  • What is Light Peak?

    - by Jonathan.
    I've heard this a lot recently, to do with Apple and Intel. Some say it's a protocol, others say it's fibre optic, and others say it's copper. One source even said it was a "wireless wire". Apparently it can carry data but not video streams - yet surely the cable can't know the difference between 1s and 0s representing data and 1s and 0s representing video streams. Others say it will replace all the wires we currently have except power; another place said it is for use inside laptops. Those are just examples, so I haven't given any sources; I just want to know what on Earth Light Peak is.

    Read the article
