Search Results

Search found 31774 results on 1271 pages for 'chris go'.

Page 730/1271 | < Previous Page | 726 727 728 729 730 731 732 733 734 735 736 737  | Next Page >

  • Regular Expressions Cookbook Ebook Deal of the Day

    - by Jan Goyvaerts
    Every day O’Reilly has an “ebook deal of the day” offering one or several of their books in electronic format for only $9.99. Twice this year I received an email from O’Reilly notifying me that Regular Expressions Cookbook was on sale. But each time the email was sent on the morning of the day itself. When it’s morning in California it’s already bedtime for me here in Thailand, so I never saw the emails until the next day, making it rather pointless to blog about the deal. But this time O’Reilly has listened to my request for advance notification. I just got an email this morning saying Regular Expressions Cookbook will be part of the Ebook Deal of the Day for 15 September 2010. That’s 15 September on the US west coast. As I write this there are a few hours to go before the deal starts at one minute past midnight California time. You can get any O’Reilly Cookbook as an ebook for only $9.99. The normal price for Regular Expressions Cookbook as an ebook is $31.99. The download includes the book in PDF, ePub, Mobi (for Kindle), DAISY, and Android formats.

    Read the article

  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog posting on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven and Oracle Applications User Experience FXA-er and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with their Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are UX mobile, social media, and methodology takeaways there for us in Debra's blog. Fujitsu Raku-Raku Smartphone Demo. I encourage you to read Debra's blog. In it, she makes reference to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here: Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate). It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are priorities for UX too. It's vital that we understand such aspects of technology adoption and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website. Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • How to prevent duplication of content on a page with too many filters?

    - by Vikas Gulati
    I have a webpage where a user can search for items based on around 6 filters. Currently I have the page implemented with one filter as the base filter (part of the url that would get indexed) and the other filters in the form of hash urls (which won't get indexed). This way there is less duplication. Something like this: example.com/filter1value-items#by-filter3-filter3value-filter2-filter2value As you may see, only one filter is within the reach of the search engine while the rest are hashed. This way I could have 6 pages. Now the problem is that I expect users to use two filters as well at times while searching. As per my analysis using the Google Keyword Analyzer, there are a fair bit of users that might use two filters in conjunction while searching. So how should I go about it? Having all the filters as part of the url would simply explode the number of pages, and sticking to the current way wouldn't let me target those users. I was thinking of going with at most 2 base filters and the rest as part of the hash url. But the only thing stopping me is that it would lead to duplication of content as per Google Webmaster Tools' suggestions on URL structure.
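    For illustration, here is a minimal Python sketch of the URL scheme described above (the filter names, the two-filter limit, and the values are hypothetical assumptions, not part of the original question):

        # Sketch: at most two "base" filters go into the indexable path; the rest
        # are appended as a hash fragment that search engines ignore.
        BASE_FILTERS = ("filter1", "filter2")  # assumption: only these are worth indexing

        def build_url(domain, filters):
            """filters is a dict such as {"filter1": "red", "filter3": "large"}."""
            path_parts = [f"{filters[name]}-items" for name in BASE_FILTERS if name in filters]
            hash_parts = [f"{name}-{value}" for name, value in sorted(filters.items())
                          if name not in BASE_FILTERS]
            url = f"https://{domain}/" + "-".join(path_parts)
            if hash_parts:
                url += "#by-" + "-".join(hash_parts)
            return url

        print(build_url("example.com", {"filter1": "red", "filter3": "large"}))
        # -> https://example.com/red-items#by-filter3-large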

    Read the article

  • How to explain why design choices are good?

    - by Telastyn
    As I've become a better developer, I find that much of my design skill comes more from intuition than from mechanical analysis. This is great. It lets me read code and get a feel for it quicker. It lets me translate designs between languages and abstractions much more easily. And it lets me get stuff done faster. The downside is that I find it harder to explain to teammates (and worse, management) why a particular design is advantageous, especially teammates who are behind the times on best practices. Statements like "This design is more testable!" or "You should favor composition over inheritance" go right over their heads, and lead into the rabbit hole of me trying to clue everyone in to the last decade of software engineering advances. I'll get better at it with practice of course, but in the meantime it involves a lot of wasted time and/or bad design (that will lead to wasted time fixing it later). How can I better explain why a certain design is superior, when the benefits aren't completely obvious to the audience?
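    As a concrete aside for readers unfamiliar with the "composition over inheritance" advice quoted above, a minimal, hypothetical Python illustration (the class names are made up for the example):

        # Inheritance hard-wires the printer to one concrete formatter:
        class HtmlFormatter:
            def format(self, text):
                return f"<p>{text}</p>"

        class HtmlReportPrinter(HtmlFormatter):   # printing is coupled to HTML formatting
            def print_report(self, text):
                print(self.format(text))

        # Composition keeps the formatter swappable (and easy to fake in tests):
        class ReportPrinter:
            def __init__(self, formatter):
                self.formatter = formatter        # any object with a .format() method

            def print_report(self, text):
                print(self.formatter.format(text))

        ReportPrinter(HtmlFormatter()).print_report("hello")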

    Read the article

  • What to choose for a multilingual site with support for Markdown and commenting

    - by Kent
    I want to publish articles at a multilingual site. I want to be able to write an article in two languages and have them available on separate URLs: thesite.foo/english-breakfast and thesite.com/engelsk-frukost. If the user's web browser is set to English I'd like to show a small notice at the top of the Swedish version with a link to the English one. The link should have an appropriate rel attribute for a translation (search for hreflang at http://diveintohtml5.org/semantics.html). There should be a way to list all articles belonging to these sets: Swedish only, English only, Swedish versions + English only, English versions + Swedish only. I'd like to publish these as four RSS feeds. And I would like to have two versions of the main site, one in Swedish (showing Swedish versions + English only) and one in English (showing English versions). I want to be able to write the articles using Markdown, as that is the formatting language I find most convenient. There should be a way for users to comment, and some kind of way for me to protect myself against comment spam. I am leaning towards learning Drupal. I suspect I'll have to code this behavior myself as a module. To be frank I'd rather work with Java. Is Drupal the way to go? Or is there something more suitable for this project?
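    As a rough sketch of the hreflang linking mentioned above, the alternate links for a pair of translations could be generated along these lines (Python; the URLs and language codes are just hypothetical examples adapted from the question):

        # Emit rel="alternate" hreflang links pointing at each translation of an article.
        TRANSLATIONS = {
            "en": "http://thesite.com/english-breakfast",
            "sv": "http://thesite.com/engelsk-frukost",
        }

        def hreflang_links(translations):
            return "\n".join(
                f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
                for lang, url in sorted(translations.items())
            )

        print(hreflang_links(TRANSLATIONS))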

    Read the article

  • Build & Install Ruby Gems with Rake

    - by kerry
    Are you using rake to build your gems?  Have you ever wished there were an install task to install them to your machine?  I, for one, have written something like this a few times:

        desc 'Install the gem'
        task :install do
          exec 'gem install pkg/goodies-0.1.gem'
        end

    That is pretty straightforward.  However, this will not work under JRuby on Mac, where the command should be ‘jgem’.  So we can enhance it to detect the platform and host OS:

        desc 'Install the gem'
        task :install do
          executable = RUBY_PLATFORM[/java/] && Config::CONFIG['host_os'] =~ /darwin/ ? 'jgem' : 'gem'
          exec "#{executable} install pkg/goodies-0.1.gem"
        end

    This is a little better.  I am still not comfortable with the sloppiness of building a shell command and executing it, though.  It is possible to do it strictly in Ruby.  I am also going to namespace it to integrate better with the GemPackageTask.  Now it will be accessed via ‘rake gem:install’:

        desc 'Install the gem'
        namespace 'gem' do
          task :install do
            Gem::Installer.new('pkg/goodies-0.1.gem').install
          end
        end

    I have included this in the goodies gem 0.2, so go ahead and install it!  ‘gem install goodies’

    Read the article

  • AJAX driven "page complete" function? Am I doing it right?

    - by Julian H. Lam
    This one might get me slaughtered, since I'm pretty sure it's bad coding practice, so here goes: I have an AJAX driven site which loads both content and javascript in one go using Mootools' Request.HTML. Since I have initialization scripts that need to be run to finish "setting up" the template, I include those in a function called pageComplete() on every page. Moving from one page to another causes the previous pageComplete() function to no longer apply, since a new one is defined. The javascript function that loads pages dynamically calls pageComplete() blindly when the AJAX call is completed and the result is loaded onto the page:

        function loadPage(page, params) { // page is a string, params is a javascript object
            if (pageRequest && pageRequest.isRunning) pageRequest.cancel();
            pageRequest = new Request.HTML({
                url: '<?=APPLICATION_LINK?>' + page,
                evalScripts: true,
                onSuccess: function(tree, elements, html) {
                    // Empty previous content and insert new content
                    $('content').empty();
                    $('content').innerHTML = html;
                    pageComplete();
                    pageRequest = null;
                }
            }).send('params='+JSON.encode(params));
        }

    So yes, if pageComplete() is not defined in one of the pages, the old pageComplete() is called, which could potentially be disastrous, but as of now, every single page has pageComplete() defined, even if it is empty. Good idea, bad idea?

    Read the article

  • Read-only lock on a SharePoint site collection, or Why can't I edit anymore?

    - by PeterBrunone
    Monday morning, the calls started.  For some reason, long-time users were unable to edit list items.  I figured we had a permissions issue, so I popped in to look at the Site Settings -- and found that I couldn't.  A quick trip to Central Administration showed that I was still listed as a Site Collection Administrator, but I had no power at all on the site collection in question. A quick glance at the logs told me that the server had recently shut down unexpectedly (this is a Hyper-V virtual machine).  Apparently, in the confusion, somehow SharePoint decided to lock the site collection as Read Only.  This can be remedied in one of two ways:

    1) In Central Administration, go to Application Management -> SharePoint Site Management -> Site collection quotas and locks.  Once you have arrived, select the correct application and site collection, and you will have the opportunity to view and set the lock status of the collection (it most likely will be set to "Read-only", and you'll want to move that radio button to "Not locked").

    2) Fire up stsadm and issue the following command:

        stsadm -o setsitelock -url http://myportalsitecollection -lock none

    Read the article

  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher now teaching the algorithms course at Eastern Washington University is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel that the class could use a more specific and industry-oriented direction (since that is where most students will go, though suggestions for an academia-oriented class are also welcome). Having only worked in industry for 2 years, I would like the community's opinion (a wider, much more collectively experienced, and in the end plausibly more credible one) on the quality of this as a statement of the purpose of an algorithms class, and, if I am completely off target, your suggestion for the purpose of a required Jr.-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows: The purpose of the algorithms class is to do three things: Primarily, to teach how to learn, do basic analysis of, and implement a given algorithm found outside of the class. Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction to start the development of a new algorithm. Third, to overview a variety of algorithms that exist and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: Divide and Conquer, Reduce and Conquer, Transform and Conquer, Greedy, Brute Force, Iterative Improvement, and Dynamic Programming. The question in short is: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world? If not, what would you suggest?

    Read the article

  • How to make pulseaudio and ubuntu detect the same audio device as alsa driver

    - by Kiwy
    I use Ubuntu 14.04 x64 and I use gnome-shell on my laptop. I have a Bose Companion 5 (which is basically a USB sound system) and an HDMI port, and both work perfectly when I just boot with the cables plugged in. However, when my laptop goes to sleep or gets unplugged from those two outputs and I plug the device back in, I end up with no hardware detected (only the built-in speakers) by pulse and the gnome-shell sound output selector, while if I use alsamixer the device shows up and is ready. gstreamer-properties allows me to select and test effectively any device, but while alsa recognizes any device on the run, pulse is not capable of handling things correctly. My question is then: how can I make pulse detect and use the same hardware as alsa, or how can I completely and gracefully remove pulseaudio (meaning the volume applet running in gnome shell)? I don't mind if the project implies recompiling half of gnome shell if it means those audio outputs work all the time. Pulse does not list my sound card when I use the command pactl list cards, while the plug-and-play sound card module is loaded according to pactl list modules. I really don't know what to do; the behavior seems pretty random.

    Read the article

  • Did Oracle make public any plans to charge for JDK in the near future? [closed]

    - by Eduard Florinescu
    I recently read an article, Twelve Disaster Scenarios Which Could Damage the Technology Industry, which mentioned, among other possible "disaster scenarios": Oracle starts charging for the JDK, giving the following argument: Oracle could start requiring license fees for the JDK from everyone but desktop users who haven't uninstalled the Java plug-in for some reason. This would burn down half the Java server-side market, but allow Oracle to fully monetize its acquisitions and investments. [...] Oracle tends to destroy markets to create products it can fully monetize. Even if you're not a Java developer, this would have a ripple effect throughout the market. [...] I actually haven't figured out why Larry hasn't decided Java should go this route yet. Some version of this scenario is actually in my company's statement of risks. I know guessing the future is impossible, and speculating about it would be endless, so I will try to frame my question in an objective, answerable way: did Oracle, or someone from Oracle speaking anonymously, make public, hint at, or leak such a possibility, or is the above plain journalistic speculation? I am unable to find the answer myself, as Google generates a lot of noise when searching for JDK.

    Read the article

  • How is the Ubuntu installation supposed to work?

    - by Bob D
    I have given up on installing Ubuntu 12.04.01 for the sixth time. I finally got Windows XP to work again. So I blitzed the Ubuntu partition and the swap partition and was about to start the sixth attempt when it occurred to me that I ought to ask how it is "supposed" to go. My installer will install Ubuntu on the Linux ext4 partition I created by hand in Windows on my C drive. But the installer keeps insisting on installing the OS on my D drive unless I intervene. So if I choose "do something else" it will accept installing Ubuntu on the C drive in the partition I previously created, but it insists on putting the "Device for boot loader installation" on the D drive. I can select a different drive at this point (which I could not with the "alongside Windows" choice), but what drive do I choose? It lists sda, sda1, sda5, sdb and sdb1. The five attempts before this all ended in disaster when I let the installer choose, so I need human intervention. Where is the safe place to put it? The results from the previous attempts left me with only an Ubuntu that would boot; booting to Windows from the grub menu failed every time. Is there a better version of Ubuntu I can use? Is v12.04.01 messed up? My goal is still to use Wine on it to run PC programs. I would like to find a shell or skin or something that makes it seem like Windows but has the security and power of Linux under the hood. I have seen this type of system and it worked very well. I know I am getting long-winded, but I have been through at least four of the seven rings of hell already, so I want this install to be the last.

    Read the article

  • Which is more important in a web application code promotion hierarchy? production environment to repo equivalence or unidirectional propagation?

    - by ghbarratt
    Let's say you have a code promotion hierarchy consisting of several environments, the polar ends of which are development (dev) and production (prod). Let's say you also have a web application where important (but not developer-controlled) files are created (and perhaps altered) in the production environment. Let's say that you (or someone above you) decided that the files which are controlled/created/altered/deleted in the production environment needed to go into the repository. Which of the following two practices/approaches do you find better?

    1. Committing these non-developed file modifications made in the production environment so that the repository reflects the production environment as closely and as often as possible.

    2. Generally ignoring the non-developed production environment alterations, placing confidence in backups to restore the production environment should it be harmed, and keeping a resolution to avoid pushing developments through the promotion hierarchy in the reverse direction (avoiding pushing from prod to dev), only committing the files found in the production environment if they were absolutely necessary in other environments for development.

    So, 1 or 2, and why? PS - I am currently slightly biased toward maintaining production environment to repository equivalence (option 1), but I keep an open mind and would accept an answer supporting either.

    Read the article

  • Need help with exploring a USB drive

    - by Bob Getsla
    When I plug in a USB drive, I see it on the left-hand edge of the Unity desktop (11.10, 64-bit), but when I try to explore it, VLC starts and tries to play whatever it can find on the USB drive. This behavior began when I updated from 11.04 to 11.10. I literally cannot look at the contents of any of the USB drives I have, because I cannot stop VLC, nor can I do anything when I click Open other than watch VLC start up. This is very frustrating because it makes my USB sticks essentially useless. HELP! I'm sure that there is something a wizard could do about this, but I am not a wizard, and I am at my wits' end. Getting to the System Settings menu works, and I can see the setup for "Removable" devices, and they are all set to "Ask", but that is clearly not what is happening. So it looks like I must reach for the command line, but where do I go to find the settings for what the desktop does when I plug in a USB drive and wish to explore the file structure on it and possibly copy a file to or from the USB drive? Right now, VLC media player is always getting in my way. :-(

    Read the article

  • Learn More About the Scalability, Uptime, and Agility of MySQL Cluster

    - by Antoinette O'Sullivan
    Learn more about the uncompromising scalability, uptime, and agility of MySQL Cluster by taking the authentic MySQL Cluster training course. During this three-day class, you will learn how to properly configure and manage the cluster nodes to ensure high availability, how to install the different nodes, as well as get a better understanding of the internals of the cluster. Events currently on the schedule for this class include:

        Location                  Date               Delivery Language
        Wien, Austria             4 February 2013    German
        London, England           12 June 2013       English
        Rennes, France            26 February 2013   French
        Hamburg, Germany          21 January 2013    German
        Munich, Germany           10 June 2013       German
        Stuttgart, Germany        26 March 2013      German
        Budapest, Hungary         19 June 2013       Hungarian
        Milan, Italy              4 February 2013    Italian
        Warsaw, Poland            18 March 2013      Polish
        Barcelona, Spain          4 March 2013       Spanish
        Madrid, Spain             25 February 2013   Spanish
        Chicago, United States    27 March 2013      English
        Reston, United States     6 February 2013    English
        Jakarta, Indonesia        21 January 2013    English
        Singapore                 18 February 2013   English

    To register for an event or to see further details on this or other courses in the authentic MySQL curriculum, please go to http://oracle.com/education/mysql.

    Read the article

  • After upgrading to 13.10, biblatex and biber are not compiling my references

    - by Lewelma
    I am working on a thesis using LaTeX, with my references relying on biblatex-apa. Ubuntu 13.04 provided for all my LaTeX needs, but after upgrading to 13.10, the biblatex / biber combo will no longer compile my APA-style references. No other changes have been made to my documents or references, and the rest of the document appears fine (albeit with broken references and no bibliography). I found reference to a possible cause, which is that biblatex 1.7-1 is incompatible with texlive 2013 (as available through the 13.10 repositories), and that the issue may be fixed by biblatex 2.7a-1, which has been committed upstream in Debian. See: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=718244 However, that doesn't help me much, as I need to compile my references quite soon. How can I get my references to compile in the meantime? Is there a patched biblatex or biber that I can manually slot in place? Is the upstream fix on its way? Or do I need to go to TeX Live and do a replacement install directly (which is not my preference)? Thanks!

    Read the article

  • Controlling access to site folders if you cannot use Roles

    - by DavidMadden
    I found myself on an assignment where I could not use System.Web.Security.Roles.  That meant that I could not use Visual Studio's Website | ASP.NET Configuration.  I had to go about things another way.  The clues were in these two websites:

    http://www.csharpaspnetarticles.com/2009/02/formsauthentication-ticket-roles-aspnet.html
    http://msdn.microsoft.com/en-us/library/b6x6shw7(v=VS.71).aspx

    You can set the restrictions on folders in your web.config without having to set the restrictions in multiple folders through their own web.config files.  In my main default.aspx file in the protected subfolder off my main site, I wrote the following code, with MultiFormAuthentication (MFA) providing the security to this point:

        string role = string.Empty;
        if (((Login)Session["Login"]).UserLevelID > 3)
        {
            role = "PowerUser";
        }
        else
        {
            role = "Newbie";
        }
        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1,
            ((Login)Session["Login"]).UserID,
            DateTime.Now,
            DateTime.Now.AddMinutes(20),
            false,
            role,
            FormsAuthentication.FormsCookiePath);
        string hashCookies = FormsAuthentication.Encrypt(ticket);
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies);
        Response.Cookies.Add(cookie);

    This all gave me the ability to change restrictions on folders without having to restart the website or do any hard coding.

    Read the article

  • Strategies for managing use of types in Python

    - by dave
    I'm a long time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck typing thing. On some methods, I'll document the expected type of parameters (eg: list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: An example of the parameter naming issue: in our code base we have a task object (an ORM object) and a task_obj object (a higher level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties - it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.
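    For illustration, a minimal sketch of the two strategies mentioned above (documenting the expected type and asserting it up front); the Task class and the function are hypothetical stand-ins for the ORM object and methods described in the edit:

        class Task:
            """Hypothetical stand-in for the ORM task object."""
            def __init__(self, name):
                self.name = name

        def close_task(task):
            """Mark a task as done.

            :param task: a Task instance (not a task_obj wrapper, and not a
                dictionary of task properties).
            """
            # Assert the expected type up front, at the cost of some duck-typing purity.
            assert isinstance(task, Task), f"expected Task, got {type(task).__name__}"
            print(f"closing {task.name}")

        close_task(Task("write report"))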

    Read the article

  • How to integrate technical line/functional manager into Scrum team?

    - by thegreendroid
    We have recently had a new line manager start who is managing our Scrum team. He is immensely experienced in our field but relatively inexperienced with Agile/Scrum. He has extensive technical expertise in embedded software (the team's domain) that would go to waste if not utilised properly. However, the team is wary of making a line manager part of the Scrum team. The general consensus is that the line manager should not be part of the Scrum team at all. There are a number of issues that may crop up, e.g. the team may start "reporting" to the manager (i.e. a daily status update!), the manager may start to micro-manage team members, etc. As it currently stands, he has already said that he feels like an outsider within the team. We really want to make use of his technical skills; we'd be foolish not to, because we are a relatively inexperienced and young team of twenty-somethings. What would be the best approach to integrating a senior "technical" line manager into a Scrum team and making him feel like he is part of the team?

    Read the article

  • 'Xojo' is the only application that I can't install

    - by Gichan
    I can't install Xojo. When I click install in the Software Center it doesn't progress. In the terminal it gets stuck at:

        gichan02@gichan02-Latitude-D520:~$ sudo apt-get install xojo
        [sudo] password for gichan02:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xojo-bin
        The following NEW packages will be installed:
          xojo xojo-bin
        0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
        Need to get 209 MB/209 MB of archives.
        After this operation, 596 MB of additional disk space will be used.
        Do you want to continue? [Y/n] Y
        0% [Working]

    Then, after waiting an hour for progress, it says:

        Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu/pool/main/x/xojo/xojo-bin_2013.41-0ubuntu1_i386.deb  Could not resolve host: private-ppa.launchpad.net

    So I added an apt repository for 'private-ppa':

        deb https://ging-giana:[email protected]/commercial-ppa-uploaders/xojo/ubuntu trusty main

    Then when I try 'apt-get update' I get:

        GPG error: https://private-ppa.launchpad.net trusty Release: The following signatures were invalid: NODATA 2

    Then I noticed something in the Software Sources "Other Software" tab:

        Added by software-center; credentials stored in /etc/apt/auth.conf
        https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu

    So I went to '/etc/apt/auth.conf', but it cannot be opened and it is not a keyserver. So I unchecked:

        Added by software-center; credentials stored in /etc/apt/auth.conf
        https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu

    The GPG error was gone, but then I found myself back at the beginning of the problem: stuck at '0% [Working]'. 'Xojo' is the only application that I can't install. Any explanation why it is like that?

    Read the article

  • Is it possible to transfer a domain without a "gap" in Whois privacy protection?

    - by Guest
    I currently own several domains on which I am using a Whois privacy protection service to hide my personal details. In the near future, I would like to transfer some of these domains to a different registrar. It has been many years since I last performed domain transfers, so I am no longer knowledgeable about what they involve. However, I have read from several registrars that they ask their customers to disable Whois protection before effecting a domain transfer. Since there are several websites out there that publish archived versions of Whois information (and ask for handsome money for the information to be hidden, of course), I would prefer to avoid having such a "gap" in my privacy protection. I figured that these websites would fetch Whois information mainly when a query is made through their own website. However, I have found out that at least one of these sites had a copy of the Whois information for a new domain up on their site within hours after I registered it, so they must have some other source (of course I used a Google search to find that out, not their own site). What that tells me is that the time it takes for the domain transfers to go through would be more than enough for these rogue websites to cache my information. If my new registrar offers privacy protection for domains right from the point of registration as well, is there no way to transfer the domain between the two without reverting to my default Whois information in between?

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices at which I used to roll my eyes in despair. I saw the software industry through pink glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, usually it is more important to produce something quick and cheap, just to see if the project can succeed. Before that, it does not really matter that much whether the design is cheap to maintain; after that, it might be too late to improve things. They need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (And not the software architecture alone.)

    Read the article

  • Site Web Analytics not updating Sharepoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating data (when you go to your site > Site Settings > Site Web Analytics reports or Site Collection Analytics reports, you get old data, with the ribbon displaying “Data Last Updated: 12/13/2010 2:00:20 AM”), please ensure that the following things are covered:

    - The Usage and Data Health Data Collection service is configured correctly.
    - The Log Collection Schedule is configured correctly.
    - The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
    - One last important timer job is the Web Analytics Trigger Workflows Timer Job; ensure that this timer job is enabled and scheduled to run at regular intervals (for each site that you need analytics for).

    After you have ensured that the web analytics service configuration is working fine and the Usage Data Import job is importing the *.usage files from the ULS LOGS folder into the WSS_Logging database, and that all the required timer jobs are running as expected… wait for a day for the report to get updated… the report gets updated automatically at 2:00 AM… and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)

    Read the article

  • How can I set my resolution to 1280x1024 on an Acer Aspire Revo 3700?

    - by torbengb
    I've just set up a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2). I've just made a clean install of Ubuntu 10.10 using the standard USB pendrive method. Almost everything works OK, but the graphics are not OK: the recommended Nvidia driver is activated but the monitor is not detected, so the resolution is wrong. How can I make Ubuntu detect my monitor? How can I get the proper resolution (1280x1024) in Ubuntu? I know that my monitor is not a CRT but an LCD: it's a BenQ, model T905, with 1280x1024 resolution at 60Hz, connected via a normal VGA cable. DVI or HDMI is not an option. When I go to System > Preferences > Monitors, I get: "It appears that your graphics driver does not support the necessary extensions to use this tool. Do you want to use your graphics driver vendor's tool instead? YES / NO" If I say NO then I get a window, and for YES I get another; in both cases I don't see how I can fix this problem. The main reason for getting this new computer was that I was sick of having graphics problems on the old one, with a very ugly solution that didn't give me hardware support - but at least I got the resolution. Why is this so difficult... sigh!

    Read the article

  • Import SSIS Project in Denali CTP1

    For years Analysis Services has had the ability to take an existing database from a server and reverse engineer it into a BIDS project.  This is extremely useful when all you have is the running instance of the database and the project that created it has long since disappeared.  Reverse engineering has never been a feature of SSIS until now. Let me walk you through the simple steps. The first step is that you obviously have to have a project deployed to an SSIS Catalog.  I will do a video on this soon, but in case you can’t wait, my good buddy Jamie Thomson has written it up here. As you can see, I have a project imaginatively called “Denali1” with one package, “Package.dtsx”. The next thing we need to do is fire up BIDS and choose the right project type (Integration Services Import Project). Now we just follow the wizard.  We make sure we specify on which server to find the Catalog and in which folder to look for the project. Next the settings are validated and we are greeted with the familiar review screen before the creation of our new project from the deployed project happens. Hit Import and away we go. The result is just what we wanted.

    Read the article
