Search Results

Search found 5318 results on 213 pages for 'managed'.

Page 94/213

  • Emacs: methods for debugging python

    - by Tom Willis
    I use emacs for all my code editing needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track. Lately, however, I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things that suggest that this is useful/possible. However, I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + appengine that might be making it more difficult, but when I try M-x pdb, I run pdb like this:
        /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current python interpreter:
        HellooKitty:hydrant twillis$ cat ~/bin/pdb
        #! /usr/bin/env python
        if __name__ == "__main__":
            import sys
            sys.version_info
            import pdb
            pdb.main()
        HellooKitty:hydrant twillis$
    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the gae project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:
        Current directory is /Users/twillis/bin/
    C-c C-f appears to do nothing, but:
        HellooKitty:hydrant twillis$ ps aux | grep pdb
        twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
        twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
        HellooKitty:hydrant twillis$
    so something is happening. C-x [space] will report that a breakpoint has been set, but I can't manage to get things going. It feels like I am missing something obvious here. Am I? So: is interactive debugging in emacs worthwhile? Is interactive debugging of a google appengine app possible? Any suggestions on how I might get this working?
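    One low-friction alternative worth noting (a sketch only, not specific to the poster's buildout setup): instead of driving the whole server under M-x pdb, you can drop a breakpoint directly into the code you care about with pdb.set_trace(), which opens the standard pdb prompt in whatever terminal or comint buffer the process is running in. The file and function names below are made up for illustration.
        # pdb_breakpoint_demo.py -- minimal, standalone illustration of pdb.set_trace()
        import pdb

        def flaky_calculation(x):
            total = 0
            for i in range(x):
                if i == 3:
                    pdb.set_trace()   # execution stops here and pdb takes over the terminal
                total += i
            return total

        if __name__ == "__main__":
            print(flaky_calculation(5))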

    Read the article

  • The emergence of Atlassian's Bamboo (and a free SQL Source Control license offer!)

    - by David Atkinson
    The rise in demand for database continuous integration has forced me to skill up in various new tools and technologies, particularly build servers. We have been using JetBrains' TeamCity here at Red Gate for a couple of years now, having replaced the ageing CruiseControl.NET, so it was a natural choice for us to use this for our database CI demos. Most of our early adopter customers have also transitioned away from CruiseControl, the majority to TeamCity and Microsoft's TeamBuild. However, more recently, for reasons we've yet to fully comprehend, we've observed a significant surge in the number of evaluators for Atlassian's Bamboo. I installed this a couple of weeks back to satisfy myself that it works seamlessly with Red Gate tools. As you would expect, Bamboo's UI has the same clean feel found in any Atlassian tool (we use JIRA extensively here at Red Gate). In the coming weeks I will post a short step-by-step guide to setting up SQL Server continuous integration using the Red Gate command lines. To help us further optimize the integration between these tools, I'd be very keen to hear from any Bamboo users who also use Red Gate tools and who might be willing to participate in usability tests and other similar research in exchange for Amazon vouchers. If you are interested in helping out, please contact me at David dot Atkinson at red-gate.com. I recently spoke with Sarah, the product marketing manager for Bamboo, and we ended up having a detailed conversation about database CI, which has been meticulously documented in the form of a blog post on Atlassian's website: http://blogs.atlassian.com/2012/05/database-continuous-integration-redgate/ We've also managed to persuade Red Gate marketing to provide a great free-tool offer: a free SQL Source Control or SQL Connect license to Atlassian users, provided it is claimed before the end of June! Full details are at the bottom of the post. Technorati Tags: sql server

    Read the article

  • Does the "security" repository provides anything not found in the "updates" repository?

    - by netvope
    For the limited number of packages I looked at (e.g. apache), I found that the package version in the updates repository is always newer than or equal to the version available in the security repository (provided both exist). This gives me the impression that all security patches posted to the security repository are also posted to the updates repository. If this is true, I can remove all <release_name>-security entries in my apt sources.list and the <release_name>-updates entries will still give me the security patches. This would speed up apt-get update quite a bit. The best documentation I could find regarding the repositories is on the community help page, which describes them as follows:
        Important Security Updates (raring-security): Patches for security vulnerabilities in Ubuntu packages. They are managed by the Ubuntu Security Team and are designed to change the behavior of the package as little as possible -- in fact, the minimum required to resolve the security problem. As a result, they tend to be very low-risk to apply and all users are urged to apply security updates.
        Recommended Updates (raring-updates): Updates for serious bugs in Ubuntu packaging that do not affect the security of the system.
    However, it does not mention whether the updates repository also includes everything in the security repository. Can anyone confirm (or disconfirm) this?
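    For reference, a typical sources.list carries a separate line per pocket; a minimal sketch for the release named in the question (mirror hostnames vary by install) looks like this:
        # /etc/apt/sources.list (sketch; mirror hostnames vary)
        deb http://archive.ubuntu.com/ubuntu raring main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu raring-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu raring-security main restricted universe multiverse
    The question is effectively whether dropping the last line is safe. The usual advice is to keep it: fixes are published to -security immediately, while the copy in -updates may arrive later.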

    Read the article

  • How to install OpenCV without nVidia drivers

    - by Subhamoy Sengupta
    I have a laptop with on-board Intel graphics. I have been using OpenCV for years with this machine and I have managed to avoid manual compilation so far. But in Ubuntu 13.10, when I try to install libopencv-dev from the repositories, it brings along libopencv-ocl, which seems to be dependent on nvidia drivers. Letting the driver install messes up my xserver completely and when I do glxinfo afterwards, I get this:
        name of display: :0.0
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
        Xlib: extension "GLX" missing on display ":0.0".
    To solve this, I purge all nVidia drivers and reinstall xserver, much like it has been suggested here, and when I purge the nvidia drivers, OpenCV development libraries are also removed, as apt-get tells me they are no longer needed. This is foreign to me, because I expected a warning that I have installed packages that depend on this, but how can removing a dependency automatically remove the package I installed without warnings or asking? I understand it has something to do with nVidia being the provider of the libopencv-ocl in the repo. How could I get around it? I would rather not compile OpenCV if I can help it. I have seen similar questions, but not a suitable answer.
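    One way to investigate this without compiling, sketched under the assumption that the OpenCL piece is pulled in as a dependency or recommend of the metapackage (package names as they appear in the question):
        # see what libopencv-dev actually pulls in
        apt-cache depends libopencv-dev
        # install while skipping recommended packages, in case the OpenCL part is only a Recommends
        sudo apt-get install --no-install-recommends libopencv-dev
        # mark the dev package as manually installed so autoremove will not take it away
        sudo apt-mark manual libopencv-dev
    The autoremove behaviour described above is apt treating libopencv-dev as automatically installed; apt-mark is the usual way to change that flag.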

    Read the article

  • Chalk Talk, Glenn Block – Leith, Edinburgh 12th March 2011

    - by David Christiansen
    Exciting news! I am proud to announce that Glenn Block from Microsoft will be coming all the way from Seattle to Scotland on the 12th March to talk to you. Glenn is a PM on the WCF team working on Microsoft's future HTTP and REST stack, and has been involved in some pretty exciting and ground-breaking Microsoft development mind-shifts in recent times. Don't miss the chance to hear him speak and ask him questions. Brief history of Glenn: Prior to WCF he was a PM on the new Managed Extensibility Framework in .NET 4.0. Glenn has a breadth of experience both inside and outside Microsoft developing software solutions for ISVs and the enterprise. Glenn has also been very active in involving folks from the community in the development of software at Microsoft. This has included shipping several products under open source licenses, as well as assisting other teams looking to do so. Glenn is also a frequent speaker at local and international events and user groups. When he's not working and playing with technology, he spends his time with his wife and daughter, either at their home in Seattle or at one of the local coffee shops. Glenn Block on the web:
        mvcConf 2 - Glenn Block: Take some REST with WCF (Feb 2011)
        @gblock on twitter
        My Technobabble - Glenn's Blog
    Sponsored by Storm ID, an award-winning full-service digital agency in Edinburgh.

    Read the article

  • Grub options are not visible when booting on Samsung ATIV Book 9 Lite running Ubuntu 14.04

    - by mjwittering
    I've managed to install Ubuntu 14.04 on my new Samsung ATIV Book 9 Lite ultrabook. After updating some configurations in the UEFI, installation was very easy. The only issue I believe I'm still experiencing is when booting: at the point where the laptop should be displaying the grub boot options, all I see is a black screen with a purple border of about 10px around the edge. I'd like to know how I can update my system so that I see the grub boot manager. I've run these commands:
        sudo cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
    Running sudo efibootmgr was not possible.
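    For what it's worth, the usual way to make the GRUB menu appear (a sketch of the standard /etc/default/grub tweak, not specific to this machine) is to disable the hidden-timeout settings and regenerate the config:
        # /etc/default/grub -- comment out the hidden-menu settings
        #GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10

        # then apply the change
        sudo update-grub
    With GRUB_HIDDEN_TIMEOUT=0 in place, GRUB deliberately skips the menu, which matches the behaviour described above.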

    Read the article

  • What causes bad performance in consumer apps?

    - by Crashworks
    My Comcast DVR takes at least three seconds to respond to every remote control keypress, making the simple task of watching television into a frustrating button-mashing experience. My iPhone takes at least fifteen seconds to display text messages and crashes ¼ of the times I try to bring up the iPad app; simply receiving and reading an email often takes well over a minute. Even the navcom in my car has mushy and unresponsive controls, often swallowing successive inputs if I make them less than a few seconds apart. These are all fixed-hardware end-consumer appliances for which usability should be paramount, and yet they all fail at basic responsiveness and latency. Their software is just too slow. What's behind this? Is it a technical problem, or a social one? Who or what is responsible? Is it because these were all written in managed, garbage-collected languages rather than native code? Is it the individual programmers who wrote the software for these devices? In all of these cases the app developers knew exactly what hardware platform they were targeting and what its capabilities were; did they not take it into account? Is it the guy who goes around repeating "optimization is the root of all evil," did he lead them astray? Was it a mentality of "oh it's just an additional 100ms" each time until all those milliseconds add up to minutes? Is it my fault, for having bought these products in the first place? This is a subjective question, with no single answer, but I'm often frustrated to see so many answers here saying "oh, don't worry about code speed, performance doesn't matter" when clearly at some point it does matter for the end-user who gets stuck with a slow, unresponsive, awful experience. So, at what point did things go wrong for these products? What can we as programmers do to avoid inflicting this pain on our own customers?

    Read the article

  • Out of space despite lots of free space remaining

    - by Kristian Thomsen
    When upgrading Ubuntu from 11.10 to 12.04 I discovered an unexpected problem. The upgrade was stopped because there wasn't enough free space for the installation. I managed to free some space and do the upgrade, but now a prompt appears after logging in saying I'm out of space. This prompt asks me if I want to examine the problem, and the Disk Usage Analyser is opened. At the top it says:
        Total filesystem capacity: 47.0 GB (used: 13.5 GB, available: 33.4 GB)
        Folder    Usage     Size
        /         100%      12.5 GB
        usr       44.8%     5.6 GB
        home      30.3%     3.8 GB
        lib       13.0%     1.6 GB
        var       9.1%      1.1 GB
        boot      2.5%      309.5 MB
    plus a lot of small contributors like etc, opt, sbin, bin, etc. I do not really understand this problem, since the analyser says that I have 33.4 GB left in this file system. What can I do to make Ubuntu use the remaining space? Running df -i in the terminal gives:
        Filesystem     Inodes    IUsed    IFree    IUse%   Mounted on
        /dev/sda7      610800    576874   33926    95%     /
        udev           213451    563      212888   1%      /dev
        tmpfs          218524    486      218038   1%      /run
        none           218524    3        218521   1%      /run/lock
        none           218524    7        218517   1%      /run/shm
        /dev/sda8      2264752   16371    2248381  1%      /home
    What does this mean?
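    The df -i output shows / at 95% inode usage, which can produce out-of-space errors even when free blocks remain. A small sketch, using standard tools only, for finding which top-level directory holds the most files (paths are generic; /home is a separate filesystem here and is counted for comparison only):
        # rough count of files and directories under each top-level directory
        for d in /*; do
            printf '%8d  %s\n' "$(sudo find "$d" 2>/dev/null | wc -l)" "$d"
        done | sort -n
    Directories with hundreds of thousands of tiny files (caches, mail spools, session files) are the usual suspects when inodes run out before disk space does.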

    Read the article

  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but then after a few minutes the wireless would stop working even though I was still connected to the network. Lately things have turned worse: I can connect to my wireless network, but it just never works. I have tried all sorts of solutions on offer here and in other forums, but none worked. At best I got the wireless to work up until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I have tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, run all the suggested updates, and get the following response when I type iwconfig in the terminal:
        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"Arsenal"
                  Mode:Managed  Frequency:2.452 GHz  Access Point: 00:04:ED:48:67:89
                  Bit Rate=1 Mb/s  Tx-Power=16 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-29 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:27  Invalid misc:57  Missed beacon:0
        eth0      no wireless extensions.
    Thanks in advance for any help that might be offered.
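    A commonly suggested workaround for Atheros cards that sit at a 1 Mb/s bit rate is to disable the driver's hardware encryption. This is a sketch only; it assumes the card uses the ath9k module, which is worth confirming first:
        # check which driver the wireless card is using
        lspci -k | grep -A3 -i network
        # persist the module option, then reload the driver
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k
    If the card turns out to use a different Atheros module (e.g. ath5k), the same idea applies with that module's name instead.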

    Read the article

  • Movember 2012

    - by Tim Koekkoek
    If you were lucky enough to visit one of the Oracle Dublin offices during the month of November, you may have noticed a bunch of mustached merchants. If you thought the mustache was the newest hair fashion in Ireland, you were wrong. These guys were the Mo Bros and proud members of MOracle, our Movember 2012 team. The aim of Movember is to raise vital funds and awareness for men's health, especially prostate cancer. To raise these funds, men don't shave their upper lips for a whole month and get sponsored for it by friends, family and colleagues. To highlight the importance of supporting this cause, take a look at these statistics:
        • 1 out of 8 men will be diagnosed with prostate cancer during their life.
        • This year more than 2,000 new cases of the disease will be diagnosed.
        • 1 out of 3 men will be diagnosed with cancer during their life.
    It was a long and heavy month for all the Mo Bros, but in the end the effort has paid off. Under the leadership of team captain Jimmy, the team managed to raise over €4,400 and was ranked #34 out of 1142 Irish Movember teams. The team couldn't have done it without the constant support of our colleagues and sponsors. Many thanks to all of you! We are very happy to have raised money and awareness for men's health. On top of that we are also happy to have raised awareness for the most underrated and abandoned piece of man's hair… the mustache. This is just the beginning; soon many men will proudly wear this fashionable look again!

    Read the article

  • Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

    - by Thanos Terentes Printzios
    The pace of new business projects continues to grow, from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time, increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support three new integration challenges:
        » Cloud Integration - integrate with the cloud; rapidly integrate a growing list of cloud applications with existing applications
        » Mobile Integration - the urgency to mobile-enable existing applications
        » IoT Integration - begin development on the latest trend of connecting Internet of Things (IoT) devices to your existing infrastructure
    Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution, aims to simplify, accelerate and optimize integrations. Oracle SOA Suite 12c and its associated products (Oracle Managed File Transfer, Oracle Cloud and Application Adapters, B2B and healthcare integration) offer the industry's most highly integrated platform for solving the increased integration challenges. Oracle SOA Suite 12c is a complete, integrated and best-of-breed platform. It enables next generation integration capabilities through:
        · A unified toolset for the development of services and composite applications.
        · A standards-based platform that is service enabled and easily consumable by modern web applications, allowing enterprises to quickly and easily adapt to changes in their business and IT environments.
        · Greater visibility, controls and analytics to govern how services and processes are deployed, reused and changed across their entire lifecycle.
    Join us to find out more about the new features of Oracle SOA Suite 12c and how it enables you to reduce time to market for new project integration and to reduce integration cost and complexity. Central to Oracle SOA Suite is the ability to simplify by integrating the disparate requirements of cloud, mobile, and IoT devices with existing on-premise applications.
    Agenda:
        Oracle SOA Suite 12c new features
        Cloud Integration
        Mobile Enablement
        Internet of Things (IoT)
        Summary - Q&A
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.
    Presenter: Heba Fouad – FMW Specialist, Technology Adoption, ECEMEA Partner Business Development
    Date: Thursday, August 28th, 10am CEST (8am UTC/11am EEST)
    Duration: 1 hour
    Register Here
    For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com

    Read the article

  • Boot-repair commands not found in PATH or not executable

    - by Bram Meerten
    I recently had problems with my Ubuntu partition (after the battery died). I managed to fix them by running Ubuntu from USB and running GParted, and it worked: I can access my files on the partition when running Ubuntu from USB. But when I restart the computer, after selecting Ubuntu in Grub, I get a black screen with a white underscore. I googled the problem and tried to solve it by setting nomodeset, but it didn't work. Next I wanted to try to fix Grub using boot-repair. I clicked on 'Recommended repair', and it tells me to type the following commands in the terminal:
        sudo chroot "/mnt/boot-sav/sda5" apt-get install -fy
        sudo chroot "/mnt/boot-sav/sda5" dpkg --configure -a
        sudo chroot "/mnt/boot-sav/sda5" apt-get purge -y --force-yes grub-common
    But when running the second command, I get this error:
        dpkg: warning: 'sh' not found in PATH or not executable.
        dpkg: warning: 'rm' not found in PATH or not executable.
        dpkg: warning: 'tar' not found in PATH or not executable.
        dpkg: error: 3 expected programs not found in PATH or not executable.
        Note: root's PATH should usually contain /usr/local/sbin, /usr/sbin and /sbin.
    I didn't edit /etc/environment (or any other files); this is what it looks like:
        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
        RUNNING_UNDER_GDM="yes"
    I have no idea how to fix this. I'm running dual-boot Ubuntu 12.04 and Windows 7, and Windows boots fine.
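    The dpkg complaint suggests PATH is not being set for the command run through chroot. A commonly suggested variation (a sketch only, reusing the mount point from the question) is to run the command inside a shell that exports a sane PATH first:
        sudo chroot "/mnt/boot-sav/sda5" /bin/bash -c \
            'export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; dpkg --configure -a'
    If the target install keeps /usr or /var on separate partitions, those would also need to be mounted under /mnt/boot-sav/sda5 before chrooting, otherwise tools like tar genuinely are missing inside the chroot.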

    Read the article

  • DNS and VPN issues

    - by Lewis
    I recently purchased a year contract for a KVM 512MB VPS running Ubuntu 11.04. I'm having some issues setting up some things on it, though - two in particular that I just can't for the life of me figure out. Okay, so I'm trying to set up pptpd as my VPN for my iPhone and my Mac when I'm out on wireless networks. I'm able to log in and the chap authenticates, but that's as far as I get: no domains will resolve and pages end up loading forever. I uncommented the ms-dns lines as someone had recommended to me and changed the DNS servers to Google's public ones, with no luck. Is there something I'm missing? (It's probably staring me in the face.) My second issue is that I have managed to set up LAMP but am having a problem with my domain. I have pointed the DNS at 123-reg to my VPS's IP and the 'www.' resolves properly, but when I try to go to the domain without the 'www.' I get the apache landing page ("The web server software is running but no content has been added, yet."). I'm pretty sure there's something I've got to configure in Apache for the virtual host, but I'm missing it. Apart from these minor set-backs I'm enjoying the low-level configuration options of having a VPS and love managing my own server. Thanks!
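    For the second issue, the usual fix is to make the virtual host answer for both the bare domain and the www name. A minimal sketch (the domain, file name and DocumentRoot are placeholders, in the Apache 2.2-style syntax shipped with 11.04):
        # /etc/apache2/sites-available/example (hypothetical site file)
        <VirtualHost *:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /var/www/example
        </VirtualHost>
    Enable it with sudo a2ensite example and reload Apache; the bare domain also needs its own A record at the registrar pointing at the VPS IP. For the VPN issue, authenticating but never loading anything is commonly a routing rather than DNS problem: the pieces that tend to get missed are enabling net.ipv4.ip_forward=1 and adding an iptables MASQUERADE rule so VPN clients can reach the internet through the server.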

    Read the article

  • Running & Managing Concurrent Queries in SQL Developer

    - by thatjeffsmith
    We’ve all been there – you’ve managed to write a query that takes longer than a few seconds to execute. Tuning aside, sometimes it takes longer than you want for a query to run. So what’s a SQL Developer user to do? I say, keep going! While you’re waiting for your query to finish, there’s no reason why you can’t continue on with your work. If you need to execute something else in a worksheet, there’s no reason to launch a 2nd or 3rd copy of SQL Developer. Just open an un-shared worksheet. Now, while you’ve got one or more queries running, you can easily get yourself into a situation where you’re not sure what’s running where. Or maybe you want to cancel a query or just check how long something’s been running. Just open the Task Progress panel. If a query or task in SQL Developer takes more than 3-5 seconds, it will appear in the Task Progress panel. You can then watch the throbbers go back and forth while you sip your coffee/soda/Red Bull. Run a query, spawn a new worksheet, run another query, watch them in the Task Progress panel. Kudos and thanks to @leight0nn for helping me get the title of this post right. If you’re looking for help in managing and monitoring sessions in general, check out this post.

    Read the article

  • Why use try … finally without a catch clause?

    - by Nick Rosencrantz
    The classical way to program is with try / catch, but when is it appropriate to use try without catch? In Python the following appears legal and can make sense:
        try:
            # do work
        finally:
            # do something unconditional
    But we didn't catch anything. Similarly, one could think in Java it would be:
        try {
            // for example, try to get a database connection
        } finally {
            // closeConnection(connection)
        }
    It looks good, and suddenly I don't have to worry about exception types etc. But if this is good practice, when is it good practice? And what are the reasons why this is not good practice or not legal? (I didn't compile the source I'm asking about and it could be a syntax error for Java, but I checked that the Python surely compiles.) A related problem I've run into is that I continue writing the function / method and at the end I must return something, and I'm in a place which should not be reached but must be a return point, so even if I handle the exceptions above I'm still returning null or an empty string at some point in the code which should not be reached, often the end of the method / function. I've always managed to restructure the code so that I don't have to return null, since that absolutely appears to look like less than good practice.
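    For what it's worth, a bare try/finally is exactly the pattern that resource-cleanup idioms are built on. A small runnable sketch in Python (the file name is arbitrary) showing the same guarantee written both ways:
        # try/finally: the cleanup runs whether or not an exception escapes
        f = open("example.txt", "w")
        try:
            f.write("hello\n")
        finally:
            f.close()

        # the equivalent with-statement wraps that same try/finally for you
        with open("example.txt", "w") as f:
            f.write("hello\n")
    The rule of thumb is: catch when you can actually handle the error, use finally (or a context manager / try-with-resources) when you only need cleanup and want the exception to keep propagating.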

    Read the article

  • Mount Ubuntu shares remotely with Mac and Windows

    - by Donald
    First time on here, so please be gentle! I have set up a small school network with an Ubuntu 12.04 server for use mainly as a file server. I have managed to set the server up (all command-line based - no GUI) and set up the Samba shares, which work really well internally. Internally, the school has a combination of Macs and Windows machines and they can all access the shares happily. The school has a fixed-IP ADSL connection and I have added a route in the router to allow me remote access to the server using SSH (port 22). That also works well. All good so far! What I now want to do is allow remote access to the shares. I have done a bit of reading and thought I had found the answer with SSHFS, but I am still none the wiser. So, my basic question is: in Windows, how can I map to the Ubuntu shares across the internet through the router? In Mac OS, how can I add the remote share across the internet? The school used to have a Windows server and the users were used to creating a VPN and then pulling up the shared folders etc., but I'm unsure how to do this with the Ubuntu server. I assume I need to add another route through the router to allow for SSHFS or something similar? Thanks in advance... Donald
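    On the client side, one low-effort option that matches the SSHFS idea is sketched below; the hostname, user and paths are placeholders, and it assumes an SSHFS/FUSE package is installed on the client machine:
        # mount the school share over the SSH port that is already forwarded
        mkdir -p ~/school-share
        sshfs donald@school.example.org:/srv/samba/share ~/school-share -p 22
        # unmount when finished (fusermount -u on Linux, plain umount on Mac OS X)
        fusermount -u ~/school-share
    SSHFS needs nothing beyond the SSH port that is already open; exposing the Samba ports directly to the internet is generally discouraged, and a VPN (e.g. OpenVPN) is the closer equivalent to what the users had with the old Windows server.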

    Read the article

  • Log PHP errors in Ubuntu

    - by resting
    I followed the setup described in "Where is the PHP error log". When I look into /var/log/php_errors.log, I can see some PHP errors:
        PHP Warning: file_get_contents(/var/www/...): failed to open stream: No such file or directory in ...
    But what I'm trying to see is the error produced when I remove a semicolon from a statement. The error above has no relation to the file from which I removed the semicolon, so we can just ignore it. When I access the page with the removed semicolon, I get:
        The website encountered an error while retrieving https://myapp/download/decode/testfile. It may be down for maintenance or configured incorrectly.
        HTTP Error 500 (Internal Server Error): An unexpected condition was encountered while the server was attempting to fulfill the request.
    But there is nothing in /var/log/php_errors.log. How do I see the error that usually says which line and which file the process failed on? The real reason for trying to see the error is that I have a very huge loop that throws the HTTP 500 error and I can't see the exact error; I'm just simulating with a removed semicolon to test things out. Other settings:
        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On
    On Ubuntu 10.04.4 LTS.
    Update: I managed to get the error message to display:
        Parse error: syntax error, unexpected T_IF in ...
    However, it's still not logged. It wasn't displaying previously because CakePHP's debug level was at 0. Setting it to 2 displays the message, but there are still no logs.
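    One thing worth checking in this situation (a sketch of the relevant php.ini directives; the ini path shown is typical for Ubuntu 10.04 and differs between the Apache and CLI SAPIs) is that logging itself is switched on, since display_errors and log_errors are independent settings:
        ; /etc/php5/apache2/php.ini (adjust path as needed)
        log_errors = On
        error_log = /var/log/php_errors.log
        ; parse errors are fatal at compile time, so they are reported by PHP itself,
        ; not by application-level handlers such as CakePHP's
    The file also has to be writable by the web server user, which is a common reason an error_log setting silently produces nothing.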

    Read the article

  • Changing the connection factory JNDI dynamically in Ftp Adapter

    - by [email protected]
    Consider a use case where you need to send the same file over to five different ftp servers. The first thought that might come to mind is to create five FtpAdapter references, one for each connection-factory location. However, this is not the most optimal approach, and this is exactly where "Dynamic Partner Links" come into play in 11g. If you're running the adapter in managed mode, it would require you to configure the connection factory JNDI in the appserver console for the FtpAdapter. In the sample below, I have mapped the connection-factory JNDI location "eis/Ftp/FtpAdapter" to the ftp server running on localhost. After you've configured the connection factory on your appserver, you will need to refer to the connection-factory JNDI in the jca artifact of your SCA process. In the example below, I've instructed the FTPOut reference to use the ftp server corresponding to "eis/Ftp/FtpAdapter". The good news is that you can change this connection-factory location dynamically using jca header properties in both the BPEL and Mediator service engines. In order to do so, the business scenario involving BPEL or Mediator would be required to use a reserved jca header property "jca.jndi", as shown below. Similarly, for Mediator, the mplan would look as shown below.
    Things to remember while using dynamic partner links:
        1) The connection factories must be pre-configured on the SOA server. In our BPEL example above, both "eis/Ftp/FtpAdater1" and "eis/Ftp/FtpAdater2" must be configured in the weblogic deployment descriptor for the FtpAdapter prior to deploying the scenario.
        2) Dynamic Partner Links are applicable to outbound invocations only.

    Read the article

  • Order of operations to render VBO to FBO texture and then rendering FBO texture full quad

    - by cyberdemon
    I've just started using OpenGL with C# via the OpenTK library. I've managed to successfully render my game world using VBOs. I now want to create a pixellated effect by rendering the frame to an offscreen FBO with a size half of my GameWindow size, and then render that FBO to a full-screen quad. I've been looking at the OpenTK example here: http://www.opentk.com/doc/graphics/frame-buffer-objects ...but the result is a black form. I'm not sure which parts of the example code belong in the OnLoad event and which in OnRenderFrame. Can someone please tell me if the outline below shows the correct order of operations?
        OnLoad
        {
            // VBO.
            // DataArrayBuffer     GenBuffers/BindBuffer/BufferData
            // ElementArrayBuffer  GenBuffers/BindBuffer/BufferData
            // ColourArrayBuffer   GenBuffers/BindBuffer/BufferData

            // FBO.
            // ColourTexture       GenTextures/BindTexture/TexParameterx4/TexImage2D

            // Create FBO.
            // Textures            Ext.GenFramebuffers/Ext.BindFramebuffer/Ext.FramebufferTexture2D/Ext.FramebufferRenderbuffer
        }

        OnRenderFrame
        {
            // Use FBO buffer.
            Ext.BindFramebuffer(FBO)
            GL.Clear
            // Set viewport to FBO dimensions.
            GL.DrawBuffer((DrawBufferMode)FramebufferAttachment.ColorAttachment0Ext)

            // Bind VBO arrays.
            GL.BindBuffer(ColourArrayBuffer)
            GL.ColorPointer
            GL.EnableClientState(ColorArray)
            GL.BindBuffer(DataArrayBuffer)
            // If world changed
            GL.BufferData(DataArrayBuffer)
            GL.VertexPointer
            GL.EnableClientState(VertexArray)
            GL.BindBuffer(ElementArrayBuffer)

            // Render VBO.
            GL.DrawElements

            // Bind visible buffer.
            GL.Ext.BindFramebuffer(0)
            GL.DrawBuffer(Back)
            GL.Clear
            // Set camera to view texture.
            GL.BindTexture(ColourTexture)

            // Render FBO texture.
            GL.Begin(Quads)
            // Draw texture on quad
            // TexCoord2/Vertex2
            GL.End
            SwapBuffers
        }

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction: Microsoft Sync Framework is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, by using any protocol over any network.
    Highlights:
        * Add sync support to new and existing applications, services and devices
        * Enable collaboration and offline capabilities for any application
        * Roam and share information from any data store, over any protocol and over any network configuration
        * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
        * Extend the architecture to support custom data types including files
    Benefits of using Sync Framework:
        * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
        * A managed API for all components and a native API for select components.
        * Conflict handling for automatic and custom resolution schemes.
        * Filters that let you synchronize a subset of data, such as only those files that contain images.
        * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
            - Any data store: Add synchronization to a wide range of applications, services and devices.
            - Any data type: Introduce new data types to synchronize.
            - Any protocol: Use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
            - Any network configuration: Enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.
    Technorati Tags: Anish Sharma, Microsoft Sync Framework

    Read the article

  • How do I create a .deb file?

    - by JamesTheAwesomeDude
    Yes, I know that this question has been asked many times before, but none of the answers really helped. I'd like to package the Minecraft launcher (which has no proprietary code, AFAIK) into a .deb file so that I can put it on a flash drive and share it with my friends. I have managed to install Minecraft manually (put some files into /opt/minecraft, download an icon, and create a .desktop file in /usr/share/applications), and I have made a shell script that completely automates the process, but it relies on wget to retrieve a few files, including the .desktop file. (It isn't a self-extracting archive, after all.) I'd like to be able to do this offline, as a lot of my friends have slow or no internet. (One of their internet lines was buried so shallowly that it actually got knocked out by the lawnmower.) I won't be loading it into a PPA or anything like that; I just want it to be a "formal" package that can be easily installed and uninstalled. (One thing that I would like is for sudo apt-get purge minecraft to also remove the .minecraft folder. It would also be nice to define the dependencies as being able to accept OpenJDK or Sun's JVM.) Oh, just so you know, the Minecraft launcher is a .jar file, but I can very, very easily launch it via shell scripts. The exact command is right on the download page.
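    A minimal binary-package layout matching what is described above, as a sketch; the package name, version, maintainer details and file names are made up, and the Depends line shows the either/or JVM idea:
        # staging layout (illustrative)
        #   minecraft/DEBIAN/control
        #   minecraft/opt/minecraft/minecraft.jar
        #   minecraft/usr/share/applications/minecraft.desktop

        # minecraft/DEBIAN/control
        Package: minecraft
        Version: 1.0-1
        Section: games
        Priority: optional
        Architecture: all
        Depends: default-jre | openjdk-7-jre | sun-java6-jre
        Maintainer: James <james@example.com>
        Description: Unofficial Minecraft launcher package
         Installs the Minecraft launcher to /opt/minecraft.

        # build and install
        dpkg-deb --build minecraft
        sudo dpkg -i minecraft.deb
    Removing ~/.minecraft on purge would go in a DEBIAN/postrm maintainer script, though deleting per-user data from a package script is generally discouraged.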

    Read the article

  • DopeWarz 2010

    - by Theo Moore
    A few (6?) weeks ago, I started a project that I've always wanted to do. I am doing a re-write of the old VB6 game DopeWars with a partner in crime. I loved that game and spent many, many hours wasting time playing it years ago. I liked it so much, I even registered it (it was $5... even then, that wasn't much to spend). The VB6 version itself was a port of the old DOS game DrugWars (never got to play that). I needed a game project to work on, as it's been far too long since I've done any game work. There is a surprising amount of logic built into the game (the way I'm writing it, anyway) with what is really a minimal interface. My design goal was to have an object model that could be easily adapted to any interface, and so far, I've managed to do that. I am even considering a web-based version that could run via Facebook (no clue how to do that... yet). I've enlisted the help of one of my DBA buddies to work on the interface while I am working on the object model. So far, this arrangement has gone well. The logical separation of concerns allows us to collaborate easily. Once we get to the Facebook step, it will be great to have a DBA "on staff" to help get that part off the ground. The object model is probably about 60% complete, with quite a bit of testing to go. More on this as we go....

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning that beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for in planning a large-scale Scrum grouping?
    ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ ppl), not smaller teams (7 ppl), are better.
    UPDATE, "Re: Scrum doesn't scale": I made a huge amount of progress personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn:
        Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project.
    SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side scrolling game that has a number of timed puzzle elements. As a simple example: I have a number of moving platforms that have been setup to transition in a pattern. Ideally I'd like to ensure that as the player first approaches them, they are in an ideal state -- whereby the player can witness the full transition and more experienced players (i.e: speedrunners) can complete the puzzle immediately without having to wait for the current transition to complete. The issue here, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to correctly calculate when the player is likely to stumble upon them. I've done a fair bit of Googling but haven't managed to turn up any decent resources with regards to solving a problem like this. The obvious solution is to only begin updating the objects when the player (or more likely: the camera) first encounters them. But this becomes difficult when you consider more complicated situations. It seems like potentially the easiest way of handling this is to have an invisible trigger volume that will tell any puzzle elements located inside of it that the player has 'arrived' upon first colliding with the player. But this would mean I'd have to logically group puzzle elements, which could become fairly messy in a hurry. Take, for instance, a puzzle that appears to the right of the screen. It may take the player a number of seconds to reach it. It would look strange if the elements involved were to remain stationary. But by the time the player arrives, it's likely things will be 'out of sync'. I wanted to post here in the hopes that others know of, or have implemented, a decent solution to this problem?
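    A sketch of the trigger-volume idea described above, in plain Python rather than any particular engine (the class and method names are made up): each platform keeps its own local clock that only starts when its trigger fires, so every player sees the pattern from its initial state, while a speedrunner simply reaches the trigger sooner.
        class MovingPlatform:
            def __init__(self, period):
                self.period = period
                self.local_time = 0.0      # frozen until the trigger activates us
                self.active = False

            def activate(self):
                self.active = True

            def update(self, dt):
                if self.active:
                    self.local_time += dt  # phase is relative to first encounter

        class TriggerVolume:
            def __init__(self, rect, platforms):
                self.rect = rect           # (x, y, w, h)
                self.platforms = platforms
                self.fired = False

            def update(self, player_pos):
                x, y, w, h = self.rect
                px, py = player_pos
                if not self.fired and x <= px <= x + w and y <= py <= y + h:
                    self.fired = True      # wake the whole group exactly once
                    for p in self.platforms:
                        p.activate()

        # usage: one trigger owns the group of puzzle elements it "wakes up"
        platforms = [MovingPlatform(period=4.0) for _ in range(3)]
        trigger = TriggerVolume((100, 0, 50, 200), platforms)
        trigger.update(player_pos=(120, 50))   # player walks in; platforms start in sync
        for p in platforms:
            p.update(1 / 60)
    Grouping elements under the trigger that owns them is the messy part the question worries about; keeping the grouping in level data (the trigger stores a list of element IDs) rather than in code is one way to keep it manageable.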

    Read the article
