Search Results

Search found 13889 results on 556 pages for 'results'.


  • PHP profiling on a production server, or other options

    - by absentx
    Alright, I need some help here. I am often asked to speed up certain sections of the websites I program for, but I have yet to figure out how to use a good PHP diagnosis/profiling tool. Some things to consider: the sites I work on are already built, and getting a testing server set up locally is a huge pain; I would have to rewrite include paths and many other things. This is a results-oriented job, and spending days getting a site fully working on a testing platform just so I can debug one page probably isn't an option. I can write plenty of PHP, but I have no clue how to work with servers, so every tutorial I read about setting up Xdebug or XHProf seems to involve installing something on a production server that I either don't have access to or don't know how to work with. So, are there any solutions out there that will show me where my PHP is slow without all the server work I don't know how to do? XHProf seems the closest to usable for me, but from what I can tell it still has to be installed on the server. If anyone can point me in the right direction on this, I would be very grateful. Maybe getting these tools onto the server isn't a big deal, but I have never worked with a server command line or anything like that; I suppose I should start sometime, but I really have no idea where to begin. I also realize that profiling on a live platform is not the greatest idea, but I feel I am in a tough spot: I have speed issues to solve, and setting up a local environment, while a great idea, just doesn't seem practical at the moment.
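
    One low-tech way to narrow this down without installing anything on the server is plain wall-clock timing inside the page itself. The sketch below is only an assumption about how the pages are structured; the file name, labels, and the use of error_log are placeholders, not anything specific to the sites in question.

        <?php
        // timer.php -- minimal inline profiler; needs no server extensions.
        // Call tick('label') around suspect sections and dump_ticks() at the end.
        $GLOBALS['__ticks'] = array();

        function tick($label)
        {
            $GLOBALS['__ticks'][] = array($label, microtime(true));
        }

        function dump_ticks()
        {
            $ticks = $GLOBALS['__ticks'];
            for ($i = 1; $i < count($ticks); $i++) {
                $ms = ($ticks[$i][1] - $ticks[$i - 1][1]) * 1000;
                // error_log() keeps the timings out of the page output on a live site.
                error_log(sprintf('%s -> %s: %.1f ms',
                    $ticks[$i - 1][0], $ticks[$i][0], $ms));
            }
        }

        // Usage inside a slow page (labels are arbitrary):
        //   require 'timer.php';
        //   tick('start');  /* ...db queries... */  tick('after queries');
        //   /* ...template rendering... */          tick('done');  dump_ticks();

    It is crude compared with XHProf, but it requires nothing beyond editing the PHP you already control, and it is usually enough to tell whether the time goes to queries, rendering, or external calls.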

    Read the article

  • Cannot terminate process: access is denied

    - by jao
    Skype and Spotify remain active after I close them. When I try to close them via Task Manager > Details > End task, I get the following error: "The operation could not be completed. Access is denied." So I have to reboot to get rid of these programs or to log in to Skype again. Also, running CMD as administrator and executing taskkill /f /im skype.exe results in an "Access is denied" error. What is going on? (This is Windows 8 RTM x64.) Update: I have to kill skype.exe because it crashed, and when I restart Skype I get the following error: "Could not open Skype, you are already signed in on this computer." Update 2: the process is owned by my own username.

    Read the article

  • In cPanel, mail goes to spam instead of the inbox in Gmail

    - by Robin Jain
    I have a cPanel VPS server. I created a domain on the same server, but when I send a mail through webmail to a Gmail address, it goes into spam. Note: the mail IP is not blacklisted, SPF records are enabled, DKIM is enabled, and reverse DNS is set up correctly.

    Email header information:

        Delivered-To: [email protected]
        Received: by 10.143.93.13 with SMTP id v13csp119806wfl; Fri, 6 Jul 2012 08:01:36 -0700 (PDT)
        Received: by 10.182.52.42 with SMTP id q10mr26133912obo.46.1341586895571; Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from lakshyacs-u.securehostdns.com ([50.97.147.134]) by mx.google.com with ESMTPS id fx3si18028369obc.144.2012.07.06.08.01.35 (version=TLSv1/SSLv3 cipher=OTHER); Fri, 06 Jul 2012 08:01:35 -0700 (PDT)
        Received-SPF: pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) client-ip=50.97.147.134;
        Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 50.97.147.134 as permitted sender) [email protected]
        Received: from localhost.localdomain ([127.0.0.1]:39016 helo=harishjoshico.com) by lakshyacs-u.securehostdns.com with esmtpa (Exim 4.77) (envelope-from <[email protected]>) id 1SnA2J-0006Nq-05 for [email protected]; Fri, 06 Jul 2012 20:31:35 +0530
        Received: from 223.189.14.213 ([223.189.14.213]) (SquirrelMail authenticated user [email protected]) by harishjoshico.com with HTTP; Fri, 6 Jul 2012 20:31:35 +0530
        Message-ID: <[email protected]>
        Date: Fri, 6 Jul 2012 20:31:35 +0530
        Subject: ggglkhl
        From: [email protected]
        To: [email protected]
        User-Agent: SquirrelMail/1.4.22
        MIME-Version: 1.0
        Content-Type: text/plain;charset=iso-8859-1
        Content-Transfer-Encoding: 8bit
        X-Priority: 3 (Normal)
        Importance: Normal
        X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
        X-AntiAbuse: Primary Hostname - lakshyacs-u.securehostdns.com
        X-AntiAbuse: Original Domain - gmail.com
        X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
        X-AntiAbuse: Sender Address Domain - harishjoshico.com

        jhkhl

    Read the article

  • Problems with Ubuntu and AMD A10-4655M APU

    - by Robert Hanks
    I have a new HP Sleekbook 6z with an AMD A10-4655M APU. I tried installing Ubuntu with Wubi; the first attempt ended with an 'AMD unsupported hardware' watermark that I wasn't able to remove (it appeared when I tried to update the drivers as Ubuntu suggested). On the second attempt Ubuntu installed (I stayed away from the suggested drivers), but the performance was extremely poor, as in Windows Vista poor. I am not sure what the solution is, whether I need to wait for a kernel update in Ubuntu or whether there are other fixes; I realise this is a new APU for the market. I would love to have Ubuntu 12.04 up and running. Windows 7 does very well with this new processor, so Ubuntu should, well, be lightning fast. The trial on the Sleekbook with the Ubuntu 12.10 Alpha 2 release was a complete failure. I created a bootable USB; using either the 'Try Ubuntu' or 'Install Ubuntu' option resulted in the usual purple Ubuntu splash screen, followed by nothing, as in a black screen without any hint of life. Interestingly, one can hear the Ubuntu intro sound. In case you are wondering, the same USB was subsequently tried on another computer with an Intel Atom processor and worked flawlessly. Lastly, a second trial on the Sleekbook gave the same results as described in the first paragraph. Perhaps the 12.10 Beta will overcome this issue, or the final 12.10 release in October. I don't have the expertise to know what the cause of this behaviour is; the issue could be something else entirely. Sadly, Windows 7 performance is very good with this processor, similar to, and in some instances better than, the 2nd-generation Intel i5 based computer I use at my workplace. Whatever the cause of the poor performance with Ubuntu 12.04 and 12.10 Alpha 2, the situation doesn't bode well for Ubuntu. Ubuntu aside, the HP Sleekbook is a good performer for the price. I am certain that once the Ubuntu issue is worked on and solutions arise, the Ubuntu performance will probably be better than ever.

    Read the article

  • How likely can my data be recovered after Windows CHKDSK performed on a degraded RAID 5 array?

    - by chrisling106
    Hello there. We have a RAID 5 setup with 3 SATA disks; #2 went down, as reported on the pre-POST screen. Unfortunately, for reasons out of my control, the system was rebooted with the array degraded. Windows XP (64-bit) loaded, and CHKDSK ran automatically and did its "recovery". From that point onwards, the following error appears every time, even in Safe Mode: lsass.exe - The endpoint format is invalid. I took those 3 disks to a data recovery expert and need to wait at least 2-4 days for results. There are 2 VMs, stored across multiple files, in this RAID 5 array, and there is no backup. Sorry, I just inherited the system from an ex-staff member who left the company 2 months before I joined. How likely is it that the data can be recovered?

    Read the article

  • Cannot establish Wired connection on Acer Aspire 5930G

    - by drakos46
    I use an Acer Aspire 5930G, dual-booted with Windows 7 and Ubuntu 13.10. My problem is that I cannot establish a wired connection in Ubuntu. The wireless connection works in both of my OSs, but the wired connection does not work, and only in Ubuntu. I have already tried the common fix:

        echo on | sudo tee /sys/class/net/eth0/device/power/control

    but without any results. Any ideas?

        eth0      Link encap:Ethernet  HWaddr 00:1d:72:c2:21:a5
                  inet6 addr: fe80::21d:72ff:fec2:21a5/64 Scope:Link
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:945 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:204318 (204.3 KB)
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:7241 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:7241 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:645874 (645.8 KB)  TX bytes:645874 (645.8 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:16:ea:57:7b:cc
                  inet addr:192.168.2.2  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::216:eaff:fe57:7bcc/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:19937 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:16933 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:10877245 (10.8 MB)  TX bytes:3131378 (3.1 MB)

        02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8071 PCI-E Gigabit Ethernet Controller (rev 16)
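
    A couple of read-only checks that may help here, sketched on the assumption that the Marvell 88E8071 is handled by the usual sky2 driver and that the interface and slot names match the output above:

        lspci -k -s 02:00.0      # "Kernel driver in use" should report sky2
        sudo modprobe sky2       # load the driver if it is missing
        sudo ethtool eth0        # check "Link detected:" with the cable plugged in

    The RX count of 0 and the missing RUNNING flag on eth0 above suggest the link is never coming up at all, which is what these commands would confirm.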

    Read the article

  • wmpnetwork.exe service clogs CPU usage

    - by Brenton Taylor
    We have remote locations, each with two ASUS media extenders that stream from a computer with a shared Media Player library. Lately, several of these locations have experienced the "wmpnetwork.exe" service pegging the CPU at 100% usage. Killing the service only results in it starting back up, and so far the only temporary solution has been to uninstall Media Player. A lot of these computers are also about 3-5 years old. Could it just be a case of outdated hardware not being able to do everything we ask of it? Edit: all are running Windows XP and Windows Media Player 11.

    Read the article

  • Gentoo Linux -> Ubuntu: Can I Preserve My LVM/RAID Devices, Or Do I Need To Reformat?

    - by Eddie Parker
    Hello: I've got a Gentoo box that I'm interested in switching over to Ubuntu. I currently have the partitions laid out using a mixture of RAID (mdadm) and LVM2, as specified in this document [1]. Ideally I'd like to wipe everything except the /home partition, as it has data I'd like to keep. Is it possible to reuse the current setup, or do I need to start over? vgdisplay, vgchange -a y, etc. don't yield any results from the Ubuntu live CD, and I'm wary of running any commands that might wipe my data. Your help would be appreciated. [1] http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml
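
    For what it's worth, the Ubuntu live session usually needs the RAID arrays assembled before LVM can see anything, which would explain vgdisplay coming back empty. A sketch of the read-only probing steps, assuming the standard package and command names:

        sudo apt-get install mdadm lvm2     # the live session may not ship these
        sudo mdadm --assemble --scan        # bring up the existing /dev/md* arrays
        cat /proc/mdstat                    # confirm the arrays assembled
        sudo vgscan                         # LVM should now find the volume group
        sudo vgchange -ay                   # activate the logical volumes
        sudo lvs                            # list them; mount read-only to inspect

    Nothing in the list writes to the volumes, so it is a reasonable way to check whether the existing layout is reusable before committing to anything.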

    Read the article

  • Decorator not calling the decorated instance - alternative design needed

    - by Daniel Hilgarth
    Assume I have a simple interface for translating text (sample code in C#):

        public interface ITranslationService
        {
            string GetTranslation(string key, CultureInfo targetLanguage);
            // some other methods...
        }

    A first simple implementation of this interface already exists and simply goes to the database for every method call. Assuming a UI that is translated at start-up, this results in one database call per control. To improve this, I want to add the following behavior: as soon as a request for one language comes in, fetch all translations for this language and cache them; all translation requests are then served from the cache. I thought about implementing this new behavior as a decorator, because all other methods of that interface implemented by the decorator would simply delegate to the decorated instance. However, the implementation of GetTranslation wouldn't use GetTranslation of the decorated instance at all to get all translations of a certain language; it would fire its own query against the database. This breaks the decorator pattern, because the functionality provided by the decorated instance is simply skipped, and it becomes a real problem if there are other decorators involved. My understanding is that a decorator should be additive; in this case, however, the decorator is replacing the behavior of the decorated instance. I can't really think of a nice solution for this - how would you solve it? Everything is allowed, even a complete redesign of ITranslationService itself.
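
    One possible direction, sketched here rather than prescribed: keep the decorator shape and give the caching decorator its own collaborator for the bulk query, so it adds caching on top of the decorated instance instead of replacing it. ITranslationRepository and its method are assumptions introduced for the sketch, not part of the original interface; the remaining ITranslationService members are elided because the original elides them.

        using System.Collections.Generic;
        using System.Globalization;

        // Assumed abstraction for the one bulk query the cache needs.
        public interface ITranslationRepository
        {
            IDictionary<string, string> GetAllTranslations(CultureInfo language);
        }

        public class CachingTranslationService : ITranslationService
        {
            private readonly ITranslationService inner;
            private readonly ITranslationRepository repository;
            private readonly Dictionary<CultureInfo, IDictionary<string, string>> cache =
                new Dictionary<CultureInfo, IDictionary<string, string>>();

            public CachingTranslationService(ITranslationService inner,
                                             ITranslationRepository repository)
            {
                this.inner = inner;
                this.repository = repository;
            }

            public string GetTranslation(string key, CultureInfo targetLanguage)
            {
                IDictionary<string, string> translations;
                if (!cache.TryGetValue(targetLanguage, out translations))
                {
                    translations = repository.GetAllTranslations(targetLanguage);
                    cache[targetLanguage] = translations;
                }
                string value;
                // Fall back to the decorated instance for keys the bulk load missed.
                return translations.TryGetValue(key, out value)
                    ? value
                    : inner.GetTranslation(key, targetLanguage);
            }

            // The other ITranslationService members simply delegate to 'inner'.
        }

    Whether one calls this a caching decorator or a caching proxy, the point of the sketch is that the bulk query lives behind its own dependency, so other decorators in the chain are still exercised for every cache miss.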

    Read the article

  • Make an object slide around an obstacle

    - by Isaiah
    I have path areas set up in a game I'm making for canvas/HTML5 and have got it working to keep the player within these areas. I have a function isOut(boundary, x, y) that returns true if the point is outside the boundary. What I do is check the new x and y separately, each paired with the corresponding old coordinate, and if the check fails I assign that axis the value from the frame before. The old positions are kept in a variable from a closure I made, like this:

        opos = [x, y]; // old position
        npos = [x, y]; // new position
        if (isOut(bound, npos[0], opos[1])) {
            npos[0] = opos[0]; // keep the old x position
        }
        if (isOut(bound, opos[0], npos[1])) {
            npos[1] = opos[1]; // keep the old y position
        }

    It looks nice and works well at certain angles, but if your boundary has diagonal regions it results in jittery motion. What's happening is that the y movement exits the area while x doesn't, so the player keeps getting pushed to the side; once it has moved to the side a bit, the player can move forward, then y exits again and the whole process repeats. Does anyone know how I may be able to achieve a smoother slide? I have access to the player's velocity vector, the angle, and the speed (when used with the angle). I can move the player with either angle/speed or x/y velocities, as I've built in backups to translate one to the other if either has been altered manually.

    Read the article

  • A new client comes into my web agency. How should I configure email and social accounts to work better? [on hold]

    - by Marco Panichi
    I have created websites for many years but still have not found the right way to organize all the email and social accounts of every client. I mean, every web agency follows dozens of customers. Each client needs at least Google Analytics, AdWords, a Facebook page, a Twitter profile, a YouTube channel, probably a listing on Google Places, and maybe a MailChimp (or similar) account. The web agency, in my opinion, must own these accounts, use them to deliver results to the customer, and of course make them available to the customer for two reasons: the customer must be able to see how things are going, and the client must have the ability to change web agency without suffering. The web agency, however, has many problems in holding all of these accounts. For example, I like the idea of having a Gmail account for each client and, from that account, using all of Google's products. But it is not possible to create that many Gmail accounts from the same IP address and with the same phone number. The web agency could invite the customer to create his own accounts, but: this is not necessarily of value to the customer (indeed...); the web agency would still manage them from the same IP address, running into problems; and if phone verification occurs, the web agency has to disturb the customer for verification. Do you have the same problem? How do you solve it?

    Read the article

  • Installed Nginx on Ubuntu and can't find where the new WWW root is

    - by Nick Not
    On Ubuntu 13.04 the original WWW root was /var/www/. Then I installed Nginx; it installed correctly, but I can't find the folder that is actually served over HTTP (I looked in /etc/nginx/). I searched for index.htm, index.html and index.php, but there are hundreds of results. Is there a command I can run to tell me which folder HTTP requests are pointed to? I tried searching for this, but I am not sure what keywords to use. Places I have looked in: /usr/share/nginx/www, /usr/share/www, /usr/share/html, /var/www, /etc/nginx/
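
    A quick way to see which root nginx is actually serving is to grep the active configuration rather than the filesystem; the paths below are the usual Ubuntu locations and may differ on a custom build:

        grep -R "root" /etc/nginx/nginx.conf /etc/nginx/conf.d/ /etc/nginx/sites-enabled/
        # The root directive in the server block that matches the request wins;
        # the packaged default site commonly points at /usr/share/nginx/html or
        # /usr/share/nginx/www.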

    Read the article

  • Forwarding ports with ssh on Linux

    - by Patrick Klingemann
    I have a database server, let's call it: dbserver. I have a web server with access to my dbserver, let's call it: webserver. I have a development machine that I'd like to use to access a database on dbserver, let's call it: dev. dbserver has a firewall rule set to allow TCP requests from webserver to dbserver:1433. I'd like to set up a tunnel from dev:1433 to dbserver:1433, so all requests to 1433 on dev are passed along to dbserver:1433. My sshd_config on webserver has the following rules set:

        AllowTcpForwarding yes
        GatewayPorts yes

    This is what I've tried:

        ssh -v -L localhost:1433:dbserver:1433 webserver

    In another terminal:

        telnet localhost 1433

    Results in:

        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Any idea what I'm doing wrong here? Thanks in advance!
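
    One thing worth keeping in mind: with -L, the destination host is connected to from webserver, not from dev, so "dbserver" must be a name or IP that webserver itself can resolve and reach. A sketch of how that could be checked, with the user name as a placeholder and assuming nc is available on webserver:

        ssh -N -L 1433:dbserver:1433 user@webserver     # -N: tunnel only, no shell
        ssh user@webserver 'nc -zv dbserver 1433'       # can webserver reach the port?

    If the second command fails, the "Connection closed by foreign host" seen in the telnet test is simply the remote end of the tunnel failing to connect.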

    Read the article

  • Search /usr/local/texlive directory with Spotlight

    - by Teake Nutma
    Having TeX Live installed on my Mac, I frequently need to consult documentation for some of the packages. It seems silly to Google this when I have the PDFs all on my HDD in /usr/local/texlive/2011/texmf-dist/doc, so I want to be able to use Spotlight to search for them. However, I can't get Spotlight to cooperate. I tried mdimport /usr/local/texlive/2011/texmf-dist/doc, which then does some work but afterwards doesn't display any results in Spotlight. I've also added the folder to Alfred's search scope, to no avail. Any ideas?
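
    One check that may be worth making is whether the importer actually indexed the files, since mdfind can be scoped to a directory even when the Spotlight UI filters the results out; the year in the path and the search term are placeholders here:

        mdfind -onlyin /usr/local/texlive/2011/texmf-dist/doc "geometry"

    If that returns hits, the index is fine and the problem is only Spotlight's presentation; if it returns nothing, the directory is likely being excluded from indexing altogether.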

    Read the article

  • Gnome3 shell video corruption with ATI Radeon HD 4850 on 11.10 Oneiric

    - by AndyAtTheWebists
    I have a problem similar to what's been mentioned in a few other questions, namely "ATI incompatible with Gnome Shell?" and "Gnome Shell glitched top bar, Ubuntu 11.10". I have read a number of other posts here and on other forums and have tried a bunch of different solutions.

    The problem manifests itself only in GNOME 3. I have tried KDE, Unity and Xfce and all are fine. The graphics corruption is visible only on GNOME panels. Everything works fine with the free ATI drivers, but they just lack power; the problem occurs with the proprietary ones from AMD/ATI. I have installed versions 11.11 and 12.1 as per the instructions at wiki.cchtml.com/index.php/Ubuntu_Oneiric_Installation_Guide, and I have exactly the same problem in both cases. I have tried this on clean installs of Ubuntu 11.10 Oneiric Ocelot (x64 and x86) and on Linux Mint 12 (x64) with the same results. Also, it looks like the PC just freezes after some time of not using it; maybe an overheat? I will look into it.

    Things I've tried:
    - Different versions of drivers from ATI, including the latest - this worked for some, not for me.
    - Installing the Oneiric-specific package generated from the driver, as well as the default install.
    - Removing the Unity global menu.
    - Disabling file-manager handling of the desktop.
    - Disabling Compiz "detect refresh rate".
    - Compiz "Sync to VBlank" on and off.

    Please help! This is the first time in 10 years that I've finally had the chance to switch my primary desktop to Linux (I stopped doing .NET dev work), and this is really getting me down. This is what the problem looks like (screenshots omitted).

    Read the article

  • VPN trace route

    - by Jake
    I am inside an Active Directory (AD) domain and am trying to trace the route to another AD domain at a remote site, supposedly connected by a VPN in between. The local domain can be accessed at 192.168.3.x and the remote location at 192.168.2.x. When I do a tracert, I am surprised to see that the results did not show the intermediate ISP nodes. If I used the public IP of the remote location, then a normal tracert going through every intermediate node would show.

        1    <1 ms    <1 ms    <1 ms  192.168.3.1
        2     1 ms    <1 ms    <1 ms  192.168.3.254
        3     7 ms     7 ms     7 ms  212.31.2xx.xx
        4   197 ms   201 ms   196 ms  62.6.1.2xx
        5   201 ms   201 ms   210 ms  vacc27.norwich.vpn-acc.bt.net [62.6.192.87]
        6   209 ms   209 ms   209 ms  81.146.xxx.xx
        7   209 ms   209 ms   209 ms  COMPANYDOMAIN [192.168.2.6]

    Can someone explain how this VPN tunnelling works? Does this mean the VPN is technically faster than going without it?

    Read the article

  • How to quickly switch monitors on multi-monitor setup? (W7)

    - by Saxtus
    At my desktop PC, I have 3 displays hooked up to my Palit GeForce GTX 480 to extend my desktop:
    - DVI: Samsung SyncMaster T220 (22" 16:10, 1680x1050, as my main)
    - DVI: Samsung SyncMaster 940BF (19" 4:3, 1280x1024)
    - HDMI: Sony Bravia KDL-32D3000 (32" 16:9, 1360x768)

    As expected, the graphics card can only drive 2 of the displays at any given time, so I have to switch manually using Windows' "Display Properties" dialog each time I want to change between them. I've tried UltraMon, DisplayFusion and PowerStrip (so please don't refer me to a question where those are given as solutions), but they are unable to switch monitors. Instead they can only change display resolutions and settings on the currently active monitors, with especially weird results when I saved the monitor states and then tried to switch monitors using the saved states! Any idea how to bypass the awful "Display Properties" dialog and its "confirm your change" timer? Thanks in advance.

    Read the article

  • Power cut during Ubuntu upgrade to 10.04 - boots to command line, apt-get and dpkg do not work.

    - by Macha
    I was upgrading Ubuntu to 10.04 when a trip switch tripped, cutting power to the computer. When it was restarted, it booted into a command-line prompt. Google tells me to try:

        sudo dpkg --configure -a

    This gets me a lot of output that ends with a list of packages. I can't tell you what the output is, as piping it to more/less does not work (it all still scrolls by and moves to the next prompt), and redirecting it to a file just results in an empty file. Google also suggested:

        sudo apt-get install -f

    This also didn't work. Is a fresh install the only solution at this point?
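
    The empty file is probably because dpkg writes its error messages to stderr rather than stdout, which would also explain why piping into less showed nothing. A sketch that captures both streams (the log file names are arbitrary):

        sudo dpkg --configure -a 2>&1 | tee ~/dpkg-configure.log | less
        sudo apt-get install -f  2>&1 | tee ~/apt-fix.log

    With the full output saved, the failing package names at the end of the dpkg run become much easier to act on.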

    Read the article

  • Where to place the R code for R+Sweave+LaTeX workflow

    - by claytontstanley
    I spent the last week learning 3 new tools: R, Sweave, and LaTeX. One question came to mind, though, while working through my first project: where do I place the majority of the R code? The tutorials I read online placed the majority of the R code in the LaTeX .Rnw file. However, I find having a bunch of R calculations in the LaTeX file distracting. What I do find extremely helpful (of course) is to call out to R from the LaTeX file and embed the result. So the workflow I've been using is to place 99% of my R code in my .R file. I run that file first, save a bunch of calculations as objects, and output the .Rout file once finished (to save the work). Then, when running Sweave, I load up that .Rout file, so that the majority of my calculations are already completed and available in the Sweave R session. My LaTeX callouts to R are then quite simple: just give me the xtable stored in 'res.table', or give me the result of an already-computed calculation stored in the variable 'res'. So I push towards the minimal amount of R code in the LaTeX file needed to achieve the desired result (embedding statistical results in the LaTeX write-up). Does anyone have any experience with this approach? I'm just worried I might run into trouble further down the line, when I start really trying to load up and leverage this workflow.
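
    For what it's worth, save()/load() on an .RData file is a slightly more robust hand-off than an .Rout transcript, since it stores the computed objects themselves. A sketch of that split; the model, object names, and file names are placeholders, and the xtable package is assumed:

        # analysis.R -- the heavy computation, run once ahead of the Sweave pass
        res       <- lm(mpg ~ wt + hp, data = mtcars)   # stand-in for the real analysis
        res.table <- xtable::xtable(summary(res))       # table object for LaTeX output
        save(res, res.table, file = "analysis.RData")   # objects the .Rnw file will load

        # In the .Rnw file the chunks then stay tiny, e.g.:
        #   <<setup, echo=FALSE>>=
        #   load("analysis.RData")
        #   @
        #   <<restable, results=tex, echo=FALSE>>=
        #   print(res.table)
        #   @

    The .Rnw file then carries only presentation logic, which is exactly the separation described above.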

    Read the article

  • Configuring Nginx for Wordpress and Rails

    - by Michael Buckbee
    I'm trying to set up a single website (domain) that contains both a front-end WordPress installation and a Ruby on Rails application under a single directory. I can get either one to work successfully on its own, but can't sort out a configuration that lets them coexist. The following is my best attempt, but it results in all Rails requests being picked up by the try_files block and redirected to "/".

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            try_files $uri $uri/ /index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            passenger_enabled on;
            passenger_base_uri /rails;
        }

    An example request to the Rails app would be http://www.flickscan.com/rails/movies/upc/025192395925
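
    One possible direction, sketched under the assumption that the Passenger directives in use are allowed inside a location block for this Passenger version: scope the WordPress fallback to location / and drop the server-level try_files, so the catch-all no longer swallows /rails requests. The PHP location and passenger_base_uri from the config above would stay as they are.

        location / {
            try_files $uri $uri/ /index.php;
        }

        location /rails {
            passenger_enabled on;
        }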

    Read the article

  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this, but the constant updating of the OpenGL library throws them all away, and in a month or two the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (Dev-C++; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's WAY OUT OF DATE (although I did pull together a basic window to support an OpenGL canvas). Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? What I'd like to know is whether or not the SuperBible 5th edition is still up to date. The suggestion to use freeglut that I found said the latest version was 2.6.0, but now it's 2.8.0! Is the OpenGL SuperBible still a good, and fairly up-to-date, place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the Dev-C++ include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions that I didn't think to ask about, since I'm only just beginning? @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of accessing OpenGL directly. I just think that, for a beginner, it would be easier to program and get good results while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors because GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.

    Read the article

  • Simplest way to expose UNIX mailboxes via IMAP or POP3 on RHEL 5.6

    - by db2
    We've got a web server running RedHat Enterprise Linux 5.6, and it has all the usual local UNIX mailboxes. As is typical, the root mailbox gets all the cron output, logwatch results, etc. I'd like an easier way to keep an eye on this mailbox besides running mail via ssh. What should I install/enable to allow access to these system mailboxes via IMAP or POP3 with minimal fuss? Either protocol would be fine for what I'm doing, as I could then add it as an account in Outlook. A bit of searching led me to cyrus-imapd and dovecot, but it seems like they are meant for more serious mail hosting operations. Either they use their own mailbox system exclusively, or don't have a simple way of making the UNIX mailboxes available. If I'm wrong about that, then I'm fine with using either of them as long as I can get to the mailboxes of the existing accounts on the box.
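
    Dovecot can be pointed straight at the traditional spools without migrating anything; a minimal sketch for the 1.x packages that ship with RHEL 5 (the file location and exact option names may differ slightly between Dovecot versions):

        # /etc/dovecot.conf (Dovecot 1.x)
        protocols = imap imaps
        mail_location = mbox:~/mail:INBOX=/var/mail/%u

    One caveat: Dovecot refuses to log in as root by default, so the usual approach for watching the root mailbox is to alias root to a normal account in /etc/aliases (and run newaliases) and then read that account over IMAP.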

    Read the article

  • An Interview with JavaOne Rock Star Martijn Verburg

    - by Janice J. Heiss
    An interview with JavaOne Rock Star Martijn Verburg, by yours truly, titled "Challenging the Diabolical Developer: A Conversation with JavaOne Rock Star Martijn Verburg," is now up on otn/java. Verburg, one of the leading movers and shakers in the Java community, is well known for his "diabolical developer" talks at JavaOne, where he uncovers some of the worst practices that Java developers are prone to. He mentions a few in the interview:
    - "A lack of communication: Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
    - No source control: Some developers simply store code in local file systems and e-mail the code in order to integrate their changes; yes, this still happens.
    - Design-driven design: Some developers are inclined to cram every design pattern from the Gang of Four (GoF) book into their projects. Of course, by that stage, they've actually forgotten why they're building the software in the first place."

    He points to a couple of core assumptions and confusions that lead to trouble: "One is that developers think that the JVM is a magic box that will clean up their memory and make their code run fast, as well as make them cups of coffee. The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try to force Java (the language) to do something it's not very good at, such as rapid Web development. So you get a proliferation of overly complex frameworks, libraries, and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!" Verburg has many insightful things to say about how to keep a Java User Group (JUG) going, about the "Adopt a JSR" program, bugathons, and much more. Check out the article here.

    Read the article

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of "big data" clearly has reached a tipping point in 2012. With plenty of coverage over the past few years in the IT press, we are now starting to see the topic of "big data" covered in the mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review. To help customers understand the challenges of managing "big data", as well as the opportunities that can be created by leveraging "big data", Oracle has recently run and published the results of a customer survey, as well as white papers and articles on this topic. Most recently, we commissioned a white paper titled "Mastering Big Data: CFO Strategies to Transform Insight into Opportunity". The premise here is that "big data" is not just a topic that CIOs should pay attention to, but one that CFOs should understand and take advantage of as well. Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets. That's why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise. With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of "big data" initiatives, building on their traditional skills in reporting and helping managers analyze data to support decision making. Here's a link to the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, along with some other resources that can help CFOs master the topic of "big data":
    - White paper: "Mastering Big Data: CFO Strategies to Transform Insight into Opportunity"
    - CFO Market Watch article: "Does Big Data Affect the CFO?"
    - Oracle survey report: "From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges"
    - Upcoming Big Data webcast with Andrew McAfee

    Here's a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo. Feel free to contact me if you have any questions or need additional information: [email protected]

    Read the article

  • Can an internally developed fast evolving, agile, short sprint web application lend itself to offshoring?

    - by Gavin Howden
    I have recently been set a target: within 12 months, be ready to successfully manage and deliver results through offshore teams on our mainline development project. Our mainline is a multi-thousand-user, highly available web application, plus various related SaaS components delivered through that web application. We work agile on the mainline, with a rapid 1-week sprint and continuous integration. Our delivery platform is a bespoke PHP framework, although we have some .NET services and components in the mix. My view is that an offshore team could work if we either ship out an entire isolated project for offshore development, or specify a component of our system in huge detail up front. But we don't currently work like that; it would conflict with the in-house method, and unless the offshore team is working within our team, with our development/deployment chain, it could be an integration nightmare. So my question is: given that we have a closed-source bespoke framework (private IP) which we train our developers to use, and that we work agile, minimising documentation, maximising communication and responding to rapidly changing requirements, with much of the quality control done via team skills building and peer review, how can I make offshoring work on our mainline development?

    Read the article
