Search Results

Search found 22879 results on 916 pages for 'case studies'.


  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We will be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app: it would be HTML-based like a website, using Django as the framework; we expect around 100K pageviews a day if the app goes viral; the users will not generate any media content, only some database data. It would be great if someone with more experience could advise on the following points: A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: the points in App Engine's favour seem to be ease of deployment, cost effectiveness, and easy scaling; for EC2, full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect the monthly cost? E.g., would the difference in CPU and memory usage between Django and, say, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management? It is not that we are attempting premature scaling; we just want a good start so that we are ready to handle unpredicted growth.

    Read the article

  • Mechanics of reasoning during programming interviews

    - by user129506
    This is not the usual "I don't want to write code during an interview" question. Here the assumption is that I do need to write code during an interview (think of rewriting quicksort or mergesort from scratch), and that I know how the algorithm works or at least have a basic idea of where to start, i.e. I don't remember the algorithm by heart. I've noticed that even on a whiteboard I always end up writing buggy code, or code that wouldn't compile. If there's a typo, fine, I can usually live with that; but when there's a crash due to some uncaught edge case, I end up losing confidence in my skills. I realize that interviewers may want to see how I write code and/or how I solve problems rather than proof-compile my whiteboard code, but I'd like to ask how I should approach this problem in mental terms, i.e. what mental steps I should follow when writing code in an interview under the two constraints above. There must be an agreed series of steps I can follow to avoid getting stuck on particular edge cases (limit cases) that would otherwise waste time and energy better spent on the overall algorithm for the general case. I hope I've made my point clear.

    Read the article

  • Is implementing an interface defined in a subpackage an anti-pattern?

    - by Michael Kjörling
    Let's say I have the following:

        package me.my.pkg;
        public interface Something {
            /* ... a couple of methods go here ... */
        }

    and:

        package me.my;
        import me.my.pkg.Something;
        public class SomeClass implements Something {
            /* ... implementation of Something goes here ... */
            /* ... some more method implementations go here too ... */
        }

    That is, the class implementing an interface lives closer to the package hierarchy root than the interface it implements, but both belong to the same package hierarchy. The reason for this, in the particular case I have in mind, is that a pre-existing package groups the functionality the Something interface logically belongs to, and the logical implementation class (both "the one you'd expect" and "the one where it needs to go given the current architecture") already exists and lives one level "up" from the interface's logical placement. The implementing class does not logically belong anywhere under me.my.pkg. In my particular case the class implements several interfaces, but that doesn't seem to make any significant difference here. I can't decide whether this is an acceptable pattern or not. Is it, and why?

    Read the article

  • How to link data in different worksheets

    - by user2961726
    I tried consolidation but I cannot get the following to work; it keeps saying no data was consolidated. Could somebody try this dummy application, and if you figure out how to do what's described below, give me a step-by-step guide so I can learn to do it myself? I'm not sure whether I need any coding for this. The dummy application has 2 worksheets, one named "1st", the other "Cases". In the "1st" worksheet you can insert and delete records in the "Case" table at the bottom. What I want is this: when I insert a row into the Case table in worksheet "1st" and enter the data for that row, the data should automatically appear in the table in the "Cases" worksheet. I can't get this to work. Likewise, if I delete a row from the table in worksheet "1st", the corresponding record should automatically be removed from the "Cases" worksheet table. Please help. The spreadsheet is here: http://ge.tt/8sjdkVx/v/0

    Read the article

  • Offlineimap -- push changes to all folders; only pull from INBOX folder

    - by g33kz0r
    I would like to set up offlineimap to do the following: sync Remote/INBOX to Local, and sync Local/Maildirs/* to Remote. Is that possible? The use case here is: 1) I download all new mail from my remote IMAP INBOX folder with offlineimap. 2) offlineimap's posthook command calls a custom Python script which does junk filtering, then sorts and categorizes my mail from the local INBOX folder into various local maildirs based on sender, etc. 3) I read my mail with mutt and perhaps do some more categorization. 4) ? Step 4 is what I'm after: I want offlineimap to push my local changes (categorization, filtering, deletion in the case of spam) back to the various folders on the IMAP server. But as you can see, there's no need to pull changes from any folder other than Remote/INBOX, since no changes happen on the IMAP server itself. I hope that's a clear explanation of the problem.

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get a formatted explain plan and SQL trace using the slow SQL (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer.

    Open a new session in SQL*Plus. Make sure you are using an updated PLAN_TABLE; this can be done by dropping it and recreating it by running @?/rdbms/admin/utlxplan.sql. Then:

        set lines 1000
        set pages 1000
        spool xplan_1.txt
        EXPLAIN PLAN FOR
        <<<<Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character>>>>
        @?/rdbms/admin/utlxplp
        spool off
        EXIT

    Open a second session in SQL*Plus:

        ALTER SESSION SET max_dump_file_size = unlimited;
        ALTER SESSION SET tracefile_identifier = '10046';
        ALTER SESSION SET statistics_level = ALL;
        ALTER SESSION SET events '10046 trace name context forever, level 12';
        <<<<Replace this line with exactly the same query you used above. Force a hard parse by modifying the case of a character>>>>
        select 'verify cursor closed' from dual;
        ALTER SYSTEM SET EVENTS '10046 trace name context off';
        EXIT

    Make sure the spooled file is formatted properly and that the 10046 trace has the relevant explain plan in it. Please upload both files (the 10046 trace is generated in udump). Need instructions to find udump?

        sqlplus "/ as sysdba"
        show parameters dump_dest

    This will show you the bdump, cdump and udump locations.

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to use TDD to continue development, and I've found a nice C unit-testing framework for it. However, I'm totally stuck on how to apply it. Take this case, for example: a user types the letter 'l', which is captured by ncurses getch(), and then an sqlite3 query is run that calls a callback function for every row. This callback function prints stuff to the screen via ncurses. The obvious way to fully test this is to simulate a keyboard and a terminal and make sure the output is as expected, but that sounds too complicated. I was thinking of adding an abstraction layer between the database and the UI, so that the callback function populates a list of entries which is printed later; I could then check whether that list contains the expected values. But why should I maintain a data structure and lists in my program when sqlite3 already does this? For example, if the user wants the list sorted some other way, it would be expensive to throw away the list and repopulate it: I would need to implement sorting, when sqlite3 already has it. With my original design I could just run another query sorted differently. Previously I've only done TDD with command-line applications, where it's easy to compare the output with what I expected. Another option would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?
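    A minimal sketch of the seam described above (in Python rather than the asker's C, purely to show the shape): the query layer takes an injected per-row callback, so production code wires it to the ncurses renderer while a test substitutes a collecting callback and asserts on the list.

        import sqlite3

        def run_query(conn, sql, params, on_row):
            # UI-agnostic: the caller decides what "display a row" means.
            for row in conn.execute(sql, params):
                on_row(row)

        def test_query_feeds_rows_to_callback():
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE notes (body TEXT)")
            conn.executemany("INSERT INTO notes VALUES (?)", [("a",), ("b",)])
            seen = []
            run_query(conn, "SELECT body FROM notes ORDER BY body", (), seen.append)
            assert seen == [("a",), ("b",)]

    Re-sorting then stays a database concern: both the UI and the tests just issue another query with a different ORDER BY instead of sorting a list in application code.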

    Read the article

  • Best Practices PHP mvc routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution, and there's a long-standing debate with a co-worker about how the routing should work. Consider the following directory structure:

        /core/Router.php
        /mvc/Controllers/{Public controllers}
        /mvc/Controllers/Private/{Controllers requiring a valid user}
        /mvc/Controllers/CMS/{Controllers requiring a valid user and specific roles}

    The question is where the current user's authentication should be established: in the Router, when choosing which controller/directory to load, or in each Controller? My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap, and the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing; leave it to the Controller to handle authentication on a case-by-case basis, which is more modular and allows more flexibility should changes need to be made to the router. PHP MVC - Custom Routing Mechanism alluded to this, but that topic was of a different nature. Alternative suggestions would be welcome as well.
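    For what it's worth, a small sketch of the router-side approach, in Python rather than PHP and with hypothetical names (REQUIRED_ROLE, ErrorController, PageController), just to make the trade-off concrete: the directory prefix implies the required role, and the router swaps in an error controller when the check fails.

        class ErrorController:
            def __init__(self, message): self.message = message
            def handle(self): return "403: " + self.message

        class PageController:
            def __init__(self, path): self.path = path
            def handle(self): return "200: rendered " + self.path

        # Hypothetical mapping from controller directory to required role.
        REQUIRED_ROLE = {"Private": "user", "CMS": "editor"}

        def route(path, user_roles):
            prefix = path.split("/")[0]
            needed = REQUIRED_ROLE.get(prefix)
            if needed and needed not in user_roles:
                return ErrorController("role '%s' required for %s/" % (needed, prefix))
            return PageController(path)

        print(route("CMS/articles", {"user"}).handle())  # 403: role 'editor' required for CMS/

    The controller-side alternative moves the same role check into a base controller's constructor, leaving the router a pure path-to-class mapping.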

    Read the article

  • Research useful for getting a job?

    - by Twirling Hearth
    I have recently started a BS program in Computer Science in order to improve my employment prospects. I already hold a Master's in sociology (part of a PhD program that I left early because I could not sustain interest any longer), so I am trying to find my way in the grand world of computers. One option that has been suggested to me is something to do with social networking: I already have a strong social-sciences background, and my knowledge of programming is increasing as I go through my studies. I know there are people in my city (Boston) doing research in that area, so it's possible I could get someone to take an interest in me. And because research is something I'm pretty good at, it's an option I'm considering career-wise. I just have one question: is it a worthwhile use of my time, career-wise? I have no burning intellectual passion for the topic, but I'm perfectly happy to do it if it means $$$. Your thoughts are welcome.

    Read the article

  • More NASM with GVim

    - by MarkPearl
    Today I am bashing around with nasm again... some useful things I found. Setting the current working directory of gvim to the current file's path is very useful, especially if you want to use commands in gvim to run your compiled code. It can be done by typing the following in command mode in gvim:

        cd %:p:h

    Once you have set it, you can use ! to run commands you would normally run in the DOS shell, e.g. !dir. Compiling code to make an executable: there are three things you need to specify to compile a basic file in nasm, namely the output file format, the output file name, and the source file name. An example would be the following (where you have a source file called temp.asm):

        nasm -f bin temp.asm -o temp.com

    Output file format: -f specifies the output file format (in this case a binary file). To get a list of the available output file formats you can type nasm -hf (for my installation bin is the default, in which case I can omit it). Output file name: this is just the name you want the compiled file to be called. For Windows machines I specify .com as my default format.

    Read the article

  • I have a problem with my PC: random reboots with POST "HyperTransport sync flood" error

    - by user29867
    I have a problem with my PC: random reboots with a POST "HyperTransport sync flood" error. It happens completely at random: sometimes when I watch a movie, sometimes when I play a game, even when I browse the internet. My spec is as follows:

        MB: ASUS M4A78-E
        CPU: Phenom II 940
        GPU: Sapphire Radeon HD4850 Vapor-X
        RAM: OCZ DDR2 PC2-8500 kit, 2x2GB
        PSU: CM Real Power M520W
        Case: CM Centurion 590

    Temperatures of the MB, CPU and HD4850 are in the normal range. The MB is the hottest, around 60-70°C under load; the Radeon is 60-65°C under load (playing a game for a couple of hours); the CPU does not go past 50-55°C. So I don't think it's a cooling problem; the CM case is pretty good and has a lot of fans. I also tried with this memory: TWIN2X4096-6400C5DHX. Same problem.

    Read the article

  • Server drives: 2.5" SCSI less reliable than 3.5" ?

    - by Bill
    I just had an HP 2.5" SAS 10k drive fail in a RAID5 array after about 2.5 years. It made me wonder whether this was a fluke or an indication that 2.5" drives are less reliable than 3.5" SAS drives. I've had many 3.5" SAS drives running for years without any issues (knock on wood). I would think that smaller drives would generate less heat and therefore be more reliable, but I couldn't find any evidence of this. I realize all drives eventually fail and that it's a crap shoot with any particular model, but I was hoping someone could point out related studies or comment on the SCSI drive sizes they've found most reliable in servers. Thanks.

    Read the article

  • Web hosting company basically forces me to use their domain name [closed]

    - by Jinx
    I've recently stumbled upon an unusual problem with a hosting company called giga-international.com. I ordered a com.hr domain from a Croatian domain-name registrar, and my client insisted on using this hosting provider because a couple of his friends are already hosted with them. I thought something was fishy when the first Google result for Giga International was a little forum rant instead of their web page. When I was checking their services they listed many features: space available, bandwidth, etc. I wanted to know how much RAM I would get for my PHP scripts, so I emailed them, and they told me that was a company secret. Seriously? Since my client still insisted on hosting with them, I bought their Webspace package. During registration I had to choose a free domain name, because I couldn't complete the registration without one. Nowhere was it said, not even in the general terms and conditions, that I wouldn't be able to change that domain name, at least not short of paying double the yearly price of a domain name. They said I can either move my domain name over to them (and pay them for the domain registration), or pay them 1 Euro per month for managing a DNS entry. With every previous hosting provider I was able to manage my domain names just by pointing the domain at their name servers; this is something completely new and absurd to me. They also said the usual approach is not possible because of security and hardware limitations. I'd like to know what you think about this case, and whether I should report it, and where. In short: they forced me to register a free domain name which doesn't suit my needs in order to sign up for their webspace package, and they refuse to change the domain name on my account unless I either transfer my domain to them or pay them for DNS management at double the yearly price of the domain name.

    Read the article

  • Unity is broken after upgrading to 12.10 (Optimus laptop)

    - by SyS
    I upgraded to GNU/Linux Ubuntu 12.10 but have been unable to use Unity properly since. I've hit the exact same problem as a lot of people: the Unity launcher and top bar are not displayed, although in my case Unity seems completely broken, as I can't even right-click. It's worth noting that I have an Optimus laptop with an Nvidia graphics card (GeForce GT 540M); Bumblebee and its optirun command work just fine after the upgrade, as usual. I tried several things: resetting Compiz and Unity (with the command 'setsid unity'), which works, but I have to do it every time I boot and it resets all my settings; updating/reinstalling/reconfiguring my Nvidia drivers as well as Bumblebee; trying the Nouveau drivers instead of nvidia-current; checking that linux-headers-generic were installed (they were). However, I couldn't reset the xorg.conf files because they're just not there: there is neither an xorg.conf file nor its backup in /etc/X11. I think this is where the problem comes from, although I'm far from an expert. Maybe recreating an xorg.conf file would fix this mess, but I have no idea how to do that. I'm tired and don't know what to do, so here I am, begging for your help.

    Read the article

  • Language Design: Are languages like Python and CoffeeScript really more comprehensible?

    - by kittensatplay
    The "Verbally Readable !== Quicker Comprehension" argument on http://ryanflorence.com/2011/case-against-coffeescript/ is really potent and interesting. I and I'm sure others would be very interested in evidence arguing against this. There's clear evidence for this and I believe it. People naturally think in images, not words, so we should be designing languages that aren't similar to human language like English, French, whatever. Being "readable" is quicker comprehension. Most articles on Wikipedia are not readable as they are long, boring, dry, sluggish and very very wordy. Because Wikipedia documents a ton of info, it is not especially helpful when compared to sites with more practical, useful and relevant info. Languages like Python and CoffeScript are "verbally readable" in that they are closer to English syntax. Having programmed firstly and mainly in Python, I'm not so sure this is really a good thing. The second interesting argument is that CoffeeScript is an intermediator, a step between two ends, which may increase the chance of bugs. While CoffeeScript has other practical benefits, this question specifically requests evidence showing support for the counter-case of language "readability"

    Read the article

  • 724% Return on an SFA project with Oracle Sales Cloud and Marketing Cloud combined!

    - by Richard Lefebvre
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that: a 724% return on investment (ROI) when it implemented these Oracle Cloud solutions in its fast-moving, rapidly-growing business. Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.

    Fast Facts:

        Return on Investment: 724%
        Payback: 2 months
        Average annual benefit: $91,534
        Cost : Benefit Ratio: 1:48

    Business Benefits: in addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams:

        Improved ability to identify new opportunities and focus sales resources on higher-probability deals
        Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
        Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
        Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations

    Read the full Apex IT ROI Case Study. You can also learn more about Apex IT's business, including the company's work with Oracle Sales and Marketing Cloud on behalf of its clients. You can point your prospects and customers to the CX blog for a similar recap of the Apex IT award and a link to the Case Study.

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that's sprung up today. I had to bookkeep and emit diagnostics for the average thread performance in highly-threaded code over a period of the last X number of calls and no more. The need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects, and I've been using them frequently; one in particular seemed like a good fit for storing each thread's performance data: ConcurrentQueue. But I wanted to store only the most recent X# of calls, and since ConcurrentQueue currently does not support a size constraint, I had to come up with my own generic version, which attempts to restrict usage to numeric types only. Unfortunately there is no IArithmetic-like interface that constrains to numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others, as well as write your own.)

    FIFO FixedSizedConcurrentQueue:

        using System;
        using System.Collections.Concurrent;
        using System.Linq;

        namespace xxxxx.Data.Infrastructure
        {
            [Serializable]
            public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
            {
                private FixedSizedConcurrentQueue() { }

                public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
                {
                    _queue = queue;
                }

                ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

                public int Size { get { return _queue.Count; } }

                // Convert to double so fractional values (e.g. rates stored
                // as double, as in the usage below) survive the average.
                public double Average { get { return _queue.Average(arg => Convert.ToDouble(arg)); } }

                public int Limit { get; set; }

                public void Enqueue(T obj)
                {
                    _queue.Enqueue(obj);
                    lock (this)
                    {
                        T @out;
                        // Trim the oldest entries until the size cap holds.
                        while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                    }
                }
            }
        }

    The usage case is straightforward. Here I'm using a FIFO queue with a maximum size of 200 to store doubles, to which I simply Enqueue() the calculated rates:

        var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!

    Read the article

  • Algorithm for grouping friends at the cinema [closed]

    - by Tim Skauge
    I've got a brain teaser for you, and it's not as simple as it sounds, so please read and try to solve the issue. Before you ask if it's homework: it's not! I just wish to see if there's an elegant way of solving this. Here's the issue: X friends want to go to the cinema and wish to be seated in the best available groups. The best case is that everyone sits together; the worst case is that everyone sits alone. Fewer groups are preferred over more groups, and sitting alone is least preferred. Input is the number of people going to the cinema, and output should be an array of integer arrays containing the ordered combinations (most preferred first), each giving the number of people in each group. Below are some examples of a number of people going to the cinema and the list of preferred combinations in which they can be seated:

        1 person:  1
        2 persons: 2, 1+1
        3 persons: 3, 2+1, 1+1+1
        4 persons: 4, 2+2, 3+1, 2+1+1, 1+1+1+1
        5 persons: 5, 3+2, 4+1, 2+2+1, 3+1+1, 2+1+1+1, 1+1+1+1+1
        6 persons: 6, 3+3, 4+2, 2+2+2, 5+1, 3+2+1, 2+2+1+1, 2+1+1+1+1, 1+1+1+1+1+1

    The number of combinations explodes beyond 7 persons, but I think you get the point by now. The question is: what does an algorithm that solves this problem look like? My language of choice is C#, so if you could give an answer in C# it would be fantastic!
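    These combinations are the integer partitions of X, and the listed order is consistent with sorting by the number of people left alone first, then by the number of groups. A rough sketch of that reading, in Python rather than the requested C#; note a full enumeration also surfaces partitions the examples above skip (e.g. 4+1+1 for six people).

        def partitions(n, maximum=None):
            """Yield every partition of n as a tuple of descending group sizes."""
            if maximum is None:
                maximum = n
            if n == 0:
                yield ()
                return
            for first in range(min(n, maximum), 0, -1):
                for rest in partitions(n - first, first):
                    yield (first,) + rest

        def seating_options(people):
            # Fewest loners first, then fewest groups.
            return sorted(partitions(people), key=lambda p: (p.count(1), len(p)))

        print(seating_options(5))
        # [(5,), (3, 2), (4, 1), (2, 2, 1), (3, 1, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]

    Translating this to C# is mostly mechanical: a recursive IEnumerable<int[]> generator plus an OrderBy/ThenBy on the same two keys.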

    Read the article

  • Laptop crashes when connecting to external harddisk

    - by Gnot
    I recently had a problem with my laptop: when I booted the machine I would get a SMART failure error message, and when I pressed F1 to continue it would take a very long time to boot, then come back to the same error message again. Thinking my hard disk was dying, I bought a new hard disk and installed it, so the laptop itself is now fine. However, I need to recover data from the old hard disk, so I bought an external hard disk case, placed the old disk in the case and connected it to my laptop via USB. The first few times I connected it I could see the files from the old disk, and I managed to copy some files over, although the transfer was extremely slow. But now, whenever I connect the old hard disk, my laptop crashes and reboots after a few minutes. Do you think the old disk is dead beyond repair? Or can you offer some help here? Any assistance would be appreciated!

    Read the article

  • How can I have sound output before logging in?

    - by ??O?????
    I have a machine (Ubuntu 11.10) that I would like to have play audio (typically through an amplifier), but the machine should be headless in its final placement; I will control what is played through ssh. However, there is no sound output until I log in at the graphical console. At first I thought it was an issue with PulseAudio, so I promptly removed it to use plain ALSA, but I have the same issues. I ssh to the machine, run alsamixer, and get the typical "cannot open mixer: No such file or directory" error (while /proc/asound/cards correctly lists what I have). If I log in on the graphical console, alsamixer works fine in the ssh session and I have sound output; when I log out, alsamixer stops working again. So something that runs when I log in (Xsession perhaps?) enables sound output and is disabled when I log out. I remember that in older versions of Ubuntu there was a drum roll when the machine showed the login screen; that is not the case anymore. Perhaps if I could somehow enable that drum roll, I'd have fixed my problem too. In any case, the question I'm asking is what the title says.

    Read the article

  • xl create doesn't bring up console

    - by ineff
    I have tried to run a VM in Xen 4.2 using the xl command (from what I gather this is now the standard toolstack, while xm is deprecated). I have the following configuration file:

        kernel  = '/media/home_separata/domU_kernel/boot/vmlinuz-linux'
        ramdisk = '/media/home_separata/domU_kernel/boot/initramfs-linux.img'
        name    = "domU_Arch_linux"
        memory  = "512"
        root    = '/dev/xvda1 ro'
        disk    = ['file:/media/home_separata/domU_kernel/arch_linux_kernel.img,xvda1,w']
        vif     = ['mac=aa:::10:11:f1,ip=192.168.0.2,bridge=xenbr0']

    When I try to start the virtual machine with xl create, it seems to work (it also brings up the vif interfaces), but if I try to connect via xl console it gives an error:

        xenconsole: Could not read tty from store: No such file or directory

    The fun fact is that I have the inverse problem using xend/xm: in that case xend doesn't bring up the vif interfaces but does activate the console. Does anyone have any suggestions?

    Read the article

  • Need recommendations for a hardy scanner that has a robust feeder tray

    - by JohnyD
    In the early days of our company all our information came in on paper, and all of what we sold was on paper. Because of this we literally rent an old bank vault to house the millions of sheets of paper that, some say, still contain relevant information. That being said, I'm looking into purchasing hardware capable of scanning all these documents and converting them to PDF. Being new to this level of digitization, I would like to ask for recommendations for accomplishing the task. Most of this material exists as separately bound studies/articles/etc. Someone would have to remove the bindings, load many pages at a time, and have the scanner feed them all through and convert them to a single PDF (one PDF per study/article/etc.). If you have any recommendations I would very much appreciate hearing them. Thanks.

    Read the article

  • Python: how to calculate data received and sent between two IP addresses and ports [closed]

    - by ramdaz
    I guess this is socket programming, but I have never done socket programming except for running the tutorial examples while learning Python. I need some ideas for implementing this. What I specifically need is a monitoring program running on a server that will poll or listen to the traffic being exchanged with different IPs across different popular ports. For example, how do I measure the data received and sent through port 80 between 192.168.1.10 and 192.168.1.1 (which is the gateway)? I checked out a number of ready-made tools like MRTG, Bwmon, Ntop, etc., but since we are looking at doing some specific pattern studies, we need to do the data capture within the program. The idea is to monitor some popular ports, study the network traffic across various periods, and compare the results with other data. We would like to figure out a way to do all this with Python.
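    As one possible starting point, here is a rough sketch of raw-socket capture: Linux-only, requires root, handles IPv4/TCP only, and a proper capture library (pcap) would be sturdier in practice. It tallies bytes per (source, destination) pair for traffic touching port 80.

        import socket
        import struct
        from collections import Counter

        ETH_P_ALL = 0x0003                # capture every ethertype
        traffic = Counter()

        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
        while True:
            frame, _ = s.recvfrom(65535)
            # 14-byte ethernet + 20-byte IP + 20-byte TCP headers minimum.
            if len(frame) < 54 or frame[12:14] != b"\x08\x00":  # IPv4 frames only
                continue
            ip = frame[14:]                                     # strip ethernet header
            ihl = (ip[0] & 0x0F) * 4                            # IP header length
            if ip[9] != 6:                                      # 6 = TCP
                continue
            total_len = struct.unpack("!H", ip[2:4])[0]
            src = socket.inet_ntoa(ip[12:16])
            dst = socket.inet_ntoa(ip[16:20])
            sport, dport = struct.unpack("!HH", ip[ihl:ihl + 4])
            if 80 in (sport, dport):                            # the port under study
                traffic[(src, dst)] += total_len                # bytes, per direction

    Reading traffic[("192.168.1.10", "192.168.1.1")] and the reversed key then gives sent versus received byte counts for that pair.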

    Read the article

  • Resume on 30 Days of SharePoint

    Dear readers, as you might have noticed, it was an organisational disaster on my end! Even though I continued my studies and research on Microsoft SharePoint 2013 during the last 30 days, I wasn't able to write an article a day to keep you posted on my progress. Nonetheless, I gathered a good number of additional blogs, mainly SharePoint MVP sites, and online forums which will be helpful in the next couple of weeks while I develop a C#-based client that will connect an existing 'legacy' application to SharePoint as a document management system (DMS), alongside other already existing solutions.

    Finding excuses? Well, no, not really. I simply didn't block enough time every day to write down my progress during my own challenge. My log book on learning about SharePoint stands at 41 hours and 15 minutes for the month, which means I spent an average of more than 1 hour per day getting into SharePoint. I know that might sound a little low, but keep in mind that I took on the challenge on top of my daily job and private responsibilities. During the same period there were also two priority-0 incidents from clients (external root cause) which took precedence over this leisure project.

    More to come: anyway, it was a first trial, and despite the low level of reporting on my blog, I'm confident about what I learned during the last 30 days and I'm ready to implement the client's requirements. At least, I would say I now have a better understanding of the road map, the path to walk during the next month. As time and secrecy allow, I'm going to note down some bits and pieces during development, and I'll 'cheat' on the challenge summary article by adding links to those new entries, just for the sake of completeness.

    Next challenge? Hmm, there were ideas during the last meetup of the Mauritius Software Craftsmanship Community (MSCC) about IT certifications, and eventually we might organise some kind of study group for specific exams, most probably Microsoft exams towards MCSD Web Developer or Windows Developer.

    Read the article

  • Which PSU should one chose? The biggest is the best?

    - by Shiki
    I'm fully aware of PSUs' "Active PFC" and that they won't consume the rated wattage all the time (makes sense). But now I'm facing a PSU replacement (guys: NEVER buy a Chieftec, seriously). The question is: if one can get a bigger unit (in my case 750W vs. 650W), should one go for the bigger one? The difference in price is not much, and no, I don't think I'll be drawing anywhere near that much power soon. (Please feel free to make the question more generic if it's really not OK in this form; I've been wondering about this for a while. In my case the candidates would be the XFX Black Edition Silver 750W and 650W.) Basically, whichever wattage I pick, I would go with XFX/Antec/something that comes with industry-qualified parts, like a Duracell but in PSU form :) But the performance question is a separate thing.

    Read the article
