Search Results

Search found 2088 results on 84 pages for 'jobs'.


  • Centos 5.5 install PearDB

    - by John Gardeniers
    Disclaimer: I use Linux for some jobs but I am not a Linux admin.

    I have a CentOS 5.4 machine which performs some server duties and doubles as a web site development machine. PHP 5.3.3 was installed from RPM with the --without-pear option. I now wish to use PEAR DB but can't figure out how to install it. If I run yum install php-pear-db, it comes back with:

        Error: Missing Dependency: php = 5.1.6-27.el5_5.3 is needed by package php-devel-5.1.6-27.el5_5.3.i386 (updates)

    The only RPM I've found that looks like it might be close currently has a dead link, so I can't even try that. What would be the best way to go about this?

    - Is there a way to reinstall from the RPM and include PEAR?
    - Can I install the dependency without breaking the current installation?
    - Should I try to uninstall the original PHP and reinstall it from source, complete with PEAR?

    I thought this might have been an SU question but the FAQ over there suggests otherwise.
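    One possible route, sketched below, is to leave the RPM-installed PHP alone and add PEAR separately, then pull the DB package in through the pear installer itself rather than through yum. Whether the php-pear RPM trips the same php-devel dependency wall on this particular box is an open question, so treat this as a sketch, not a verified fix:

        # hedged sketch: install the PEAR installer, then the DB package,
        # without touching the existing PHP build
        yum install php-pear      # may or may not hit the same dependency error
        pear install DB           # fetches the legacy PEAR DB package directly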


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] && \
            [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. In my CPU usage graph, the cleanup running is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute times. At 15:00 I removed the 39-minute time from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations show the same pattern.

    At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit.

    EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
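    Dropping fuser, as the edit proposes, would look something like the line below. This is only a sketch of that change: it removes the safety check that skips session files currently held open by a running process, so each candidate file is deleted outright:

        # same job minus the per-file fuser check
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] && \
            [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete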


  • Does OS X support linux-like features?

    - by Xeoncross
    I have been using XP for almost a decade. Contrary to popular belief, it has served me well. In the last 4 years I don't remember ever having it crash on me. It has the most stable GUI I have ever used. However, an OS is only as good as its GUI and command line combined, and the Windows command line is awful and totally useless.

    So I have been using Ubuntu for a couple of years and Debian on my servers. The only problem is that Gnome applications (Ubuntu 6-10) constantly crash on me (Ubuntu Studio was the most unstable OS I ever used). I have high-quality Gigabyte, MSI, and Asus motherboards and CPUs from old Semprons/Athlons to Celerons/Core 2 Quads. What are the odds that every PC I have ever owned can't remain stable with a Linux GUI? Not to mention that the Adobe CSx suite doesn't work on Linux.

    Anyway, I am now looking at moving to a Mac in the hope of finding a stable GUI and a feature-packed command line. Does Mac OS have an integrated command line where I can do Linux-like awesomeness like rsync, ssh, wget, cron jobs, package updates, and git without having an unstable GUI? Basically, until the Linux GUI applications get a little better, is OS X what I need?


  • What are the right questions to ask when deciding whether to use Chef or Puppet?

    - by John Feminella
    I am about to start a new project which will, in part, require deploying many identical nodes of approximately three different classes:

    - Data nodes, which will run sharded instances of MongoDB.
    - Application nodes, which will run instances of a Ruby on Rails application and an older ASP.NET MVC application.
    - Processing nodes, which will run jobs requested by the application nodes.

    All the nodes will run on instances of Ubuntu 10.04, though they will have different packages installed.

    I have some familiarity with Chef from previous projects, though I don't consider myself an expert. In an effort to do due diligence, I have been investigating alternative possibilities. We have a number of folks in-house who are long-time Puppet users, and they have encouraged me to take a look.

    I am having trouble evaluating both choices, though. Chef and Puppet share much of the same domain terminology -- packages, resources, attributes, and so on -- and they have a common history that stems from taking different approaches to the same problem. So in some sense they are very similar. But much of the comparison information I've found, like this article, is a little outdated. If you were starting this project today, what questions would you ask yourself to decide whether you should use Chef or Puppet for configuration management? (Note: I don't want answers to the question "Should I use Chef or Puppet?")


  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive).

    Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
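    A minimal sketch of what such a bash script could look like. Every path, template name, repository URL and credential below is a hypothetical placeholder, and it assumes a Debian/Ubuntu-style Apache layout:

        #!/bin/bash
        # provision one new customer instance; usage: ./provision.sh customername
        set -e
        CUSTOMER="$1"
        DB_PASS="$(openssl rand -hex 12)"

        # 1. ssh-capable system user
        useradd -m -s /bin/bash "$CUSTOMER"

        # 2. MySQL database and user
        mysql -e "CREATE DATABASE \`$CUSTOMER\`;
                  CREATE USER '$CUSTOMER'@'localhost' IDENTIFIED BY '$DB_PASS';
                  GRANT ALL ON \`$CUSTOMER\`.* TO '$CUSTOMER'@'localhost';"

        # 3. virtual host from a template, then enable it
        sed "s/__NAME__/$CUSTOMER/g" /etc/apache2/vhost.template \
            > "/etc/apache2/sites-available/$CUSTOMER"
        a2ensite "$CUSTOMER" && service apache2 reload

        # 4. application checkout as the new user
        su - "$CUSTOMER" -c "git clone git://example.com/app.git ~/app"

        # 5. schema, init data and cron jobs from templates
        mysql "$CUSTOMER" < /opt/app/schema.sql
        crontab -u "$CUSTOMER" /opt/app/crontab.template

    The remaining steps (first application user, registering with capistrano) depend on the application itself, and a matching delete script would walk the same list in reverse.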


  • Potential impact of large broadcast domains

    - by john
    I recently switched jobs. By the time I left my last job our network was three years old and had been planned very well (in my opinion). Our address range was split down into a bunch of VLANs, with the largest subnet a /22 range. It was textbook.

    The company I now work for has built up their network over about 20 years. It's quite large, reaches multiple sites, and has an eclectic mix of devices. This organisation only uses VLANs for very specific things. I only know of one usage of VLANs so far, and that is the SAN, which also crosses a site boundary.

    I'm not a network engineer; I'm a support technician. But occasionally I have to do some network traces for debugging problems, and I'm astounded by the quantity of broadcast traffic I see. The largest network is a straight Class B network, so it uses a /16 mask. Of course, if that were filled with devices the network would likely grind to a halt. I think there are probably 2000+ physical and virtual devices currently using that subnet, but it (mostly) seems to work. This practice seems to go against everything I've been taught.

    My question is: what measurement of which metric would tell me that there is too much broadcast traffic bouncing about the network, and what are the tell-tale signs that you are perhaps treading on thin ice? The way I see it, there are more and more devices being added, and that can only mean more broadcast traffic, so there must be a threshold. Would things just get slower and slower, or would the effects be more subtle than that?
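    One rough way to put a number on it is to count broadcast frames arriving at a single host over a fixed window; a minimal sketch (the interface name is an assumption):

        # count broadcast frames seen on eth0 in 60 seconds
        timeout 60 tcpdump -i eth0 -nn ether broadcast 2>/dev/null | wc -l

    Divided by 60, that gives broadcasts per second at one point on the segment, which can then be tracked over time as the device count grows.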


  • Global Email Forwarding with EXIM?

    - by Dexirian
    Been trying to find a solution to this for a while without success, so here I go.

    I was given the task of building a high-availability, load-balanced network cluster for our 2 Linux servers. I did some work and managed to get DNS + SQL + web folders + mail synchronisation going between both. Now I would like my server 2 to only do mail and server 1 to only do web hosting. I transferred all the accounts from 1 to 2 using the WHM built-in account transfer feature. I created 2 different rsync jobs that sync, update, and delete the files for mail and websites. I was able to successfully transfer the mail accounts from 1 to 2, and server 2 works flawlessly. All I had to do was change the MX entries to point to the new server and bingo.

    Now my problem is: some clients have their mail software configured so that it points to oldserver.domain.com. I can't make the (A) entry of oldserver.domain.com point to the new server for obvious reasons. I thought of using .forward files and adding them to the home directories of the concerned users, but that would be very difficult. So my question is: is there a way to configure exim so that it will only forward mail to the new server? I need to change all the users so they use their mail on server 2 without them doing anything. Thanks!

    EDIT: To clarify my problem:

    - Some clients have their mail pointing to oldserver.xyz instead of mail.oldserver.xyz.
    - I want to know if I can do something that avoids modifying the clients' configuration.
    - I would also like to know if there is a way to find out which clients aren't properly configured.
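    One hedged possibility is a manualroute router in the old server's exim configuration that hands everything for the affected domains straight to the new machine. The router name, domain and hostname below are placeholders, and this is a sketch rather than a tested configuration:

        # hypothetical router for the old server's exim configuration
        forward_to_newserver:
          driver = manualroute
          domains = example.com              # domains whose mailboxes now live on server 2
          transport = remote_smtp
          route_list = * newserver.example.com

    Whether this survives in a cPanel/WHM-managed exim setup without being overwritten on a config rebuild is a separate question worth checking.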


  • where are the default ulimit values set? (linux, centos)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log in and do:

        ulimit -u

    on one machine I get "unlimited", and on the other, 77824. When I run a cron job like:

        * * * * * ulimit -u > ulimit.txt

    I get the same results (unlimited, 77824). I am trying to determine where these are set so that I can alter them. They are not set in any of my profiles (.bashrc, /etc/profile, etc. -- these wouldn't affect cron anyway) nor in /etc/security/limits.conf (which is empty). I have scoured Google and even gone so far as to do:

        grep -Ir 77824 /

    but nothing has turned up so far. I don't understand how these machines could have come preset with different limits.

    I am actually wondering not for these machines, but for a different (CentOS 6) machine which has a limit of 1024, which is far too small. I need to run cron jobs with a higher limit, and the only way I know how to set that is in the cron job itself. That's OK, but I'd rather set it system-wide so it's not as hacky. Thanks for any help. This seems like it should be easy (NOT).

    EDIT -- SOLVED: OK, I figured this out. It seems to be an issue either with CentOS 6 or perhaps my machine configuration. On the CentOS 5 configuration, I can set the following in /etc/security/limits.conf:

        * - nproc unlimited

    and that effectively updates the account and cron limits. However, this does not work on my CentOS 6 box. Instead, I must do:

        myname1 - nproc unlimited
        myname2 - nproc unlimited
        ...

    and things work as expected. Maybe the UID specification works too, but the wildcard (*) definitely DOES NOT here. Oddly, wildcards DO work for the 'nofile' limit. I still would love to know where the default values are actually coming from, because by default this file is empty, and I couldn't see why I had different defaults for the two CentOS boxes, which had identical hardware and were from the same provider.
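    For what it's worth on the closing question: stock CentOS 6 ships a drop-in file that caps nproc for ordinary users, which would explain a 1024 default appearing while limits.conf itself stays empty (contents quoted from memory, so treat as approximate):

        # /etc/security/limits.d/90-nproc.conf on a stock CentOS 6 install
        *          soft    nproc     1024

    Files under /etc/security/limits.d/ are read after limits.conf, which is also a plausible reason a wildcard nproc entry in limits.conf appears not to take effect there while per-user entries do.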


  • Error when Eclipse started and now my package explorer is empty!

    - by carpenteri
    Friends, just a quick introduction: I'm currently learning Java, using a combination of the Head First Java book and Eclipse. Everything was going well until tonight!

    When I started up Eclipse tonight, I saw an error message which I didn't pay attention to (I know! I know!) and acknowledged, after which the project explorer was empty where it used to contain my Head First project! After a quick google I found the workspace log (workspace/.metadata/.log) and the errors are shown below. The version of Eclipse I am using is 20100218-1602 and the only plugin that I use is egit. Any help would be much appreciated. Thanks!

        !SESSION 2010-06-08 19:24:33.841 -----------------------------------------------
        eclipse.buildId=unknown
        java.version=1.5.0_22
        java.vendor=Sun Microsystems Inc.
        BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_GB
        Framework arguments: -product org.eclipse.epp.package.java.product
        Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.java.product

        !ENTRY org.eclipse.ui.workbench 4 2 2010-06-08 19:24:36.475
        !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.ui.workbench".
        !STACK 1
        org.eclipse.ui.WorkbenchException: Content is not allowed in prolog.
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:121)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
            at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
            at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
            at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
            at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
            at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)
        Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
            ... 6 more
        !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475
        !MESSAGE Content is not allowed in prolog.
        !STACK 0
        org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
            at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
            at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
            at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
            at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
            at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)
        !SUBENTRY 1 org.eclipse.ui 4 0 2010-06-08 19:24:36.475
        !MESSAGE Content is not allowed in prolog.
        !STACK 0
        org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:94)
            at org.eclipse.ui.XMLMemento.createReadRoot(XMLMemento.java:64)
            at org.eclipse.ui.internal.Workbench$49.run(Workbench.java:1895)
            at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
            at org.eclipse.ui.internal.Workbench.restoreState(Workbench.java:1890)
            at org.eclipse.ui.internal.WorkbenchConfigurer.restoreState(WorkbenchConfigurer.java:183)
            at org.eclipse.ui.application.WorkbenchAdvisor$1.run(WorkbenchAdvisor.java:781)

        !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:41.442
        !MESSAGE Internal Error
        !STACK 1
        org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'OpenTypeHistory.xml'
            at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:257)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
            at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199)
            at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185)
            at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381)
            at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
        Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
            ... 6 more
        !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:41.442
        !MESSAGE Problems reading information from XML 'OpenTypeHistory.xml'
        !STACK 0
        org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
            at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.<init>(OpenTypeHistory.java:199)
            at org.eclipse.jdt.internal.corext.util.OpenTypeHistory.getInstance(OpenTypeHistory.java:185)
            at org.eclipse.jdt.internal.ui.JavaPlugin.initializeAfterLoad(JavaPlugin.java:381)
            at org.eclipse.jdt.internal.ui.InitializeAfterLoadJob$RealJob.run(InitializeAfterLoadJob.java:36)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

        !ENTRY org.eclipse.jdt.ui 4 10001 2010-06-08 19:24:50.435
        !MESSAGE Internal Error
        !STACK 1
        org.eclipse.jdt.internal.ui.JavaUIException: Problems reading information from XML 'QualifiedTypeNameHistory.xml'
            at org.eclipse.jdt.internal.corext.util.History.createException(History.java:70)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:257)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
            at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33)
            at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26)
            at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602)
            at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836)
            at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474)
            at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546)
            at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216)
            at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266)
            at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685)
            at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:592)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
        Caused by: org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
            ... 25 more
        !SUBENTRY 1 org.eclipse.jdt.ui 4 4 2010-06-08 19:24:50.435
        !MESSAGE Problems reading information from XML 'QualifiedTypeNameHistory.xml'
        !STACK 0
        org.xml.sax.SAXParseException: Content is not allowed in prolog.
            at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:264)
            at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:292)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:255)
            at org.eclipse.jdt.internal.corext.util.History.load(History.java:166)
            at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.<init>(QualifiedTypeNameHistory.java:33)
            at org.eclipse.jdt.internal.corext.util.QualifiedTypeNameHistory.getDefault(QualifiedTypeNameHistory.java:26)
            at org.eclipse.jdt.internal.ui.JavaPlugin.stop(JavaPlugin.java:602)
            at org.eclipse.osgi.framework.internal.core.BundleContextImpl$2.run(BundleContextImpl.java:843)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.eclipse.osgi.framework.internal.core.BundleContextImpl.stop(BundleContextImpl.java:836)
            at org.eclipse.osgi.framework.internal.core.BundleHost.stopWorker(BundleHost.java:474)
            at org.eclipse.osgi.framework.internal.core.AbstractBundle.suspend(AbstractBundle.java:546)
            at org.eclipse.osgi.framework.internal.core.Framework.suspendBundle(Framework.java:1098)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.decFWSL(StartLevelManager.java:593)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.doSetStartLevel(StartLevelManager.java:261)
            at org.eclipse.osgi.framework.internal.core.StartLevelManager.shutdown(StartLevelManager.java:216)
            at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.suspend(InternalSystemBundle.java:266)
            at org.eclipse.osgi.framework.internal.core.Framework.shutdown(Framework.java:685)
            at org.eclipse.osgi.framework.internal.core.Framework.close(Framework.java:583)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.shutdown(EclipseStarter.java:409)
            at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:200)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:592)
            at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559)
            at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514)
            at org.eclipse.equinox.launcher.Main.run(Main.java:1311)
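    Every failure in this log is the same underlying problem: an XML file in the workspace's .metadata state is corrupt ("Content is not allowed in prolog" typically means the file starts with garbage or NUL bytes, often after an unclean shutdown). A hedged sketch of one way to let Eclipse regenerate the two JDT history files named in the log -- the workspace path is an assumption, and Eclipse should be closed first:

        # hypothetical workspace location; adjust to where your workspace lives
        cd ~/workspace/.metadata/.plugins/org.eclipse.jdt.ui
        mv OpenTypeHistory.xml OpenTypeHistory.xml.bak
        mv QualifiedTypeNameHistory.xml QualifiedTypeNameHistory.xml.bak

    The workbench restore failure at the top of the log suggests similar corruption in the workbench state, so the project itself is likely still on disk and could be re-imported via File > Import > Existing Projects into Workspace.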


  • Compiling OpenCV in Android NDK

    - by evident
    PLEASE SEE THE ADDITIONS AT THE BOTTOM! The first problem is solved on Linux, though not yet under Windows and Cygwin, but there is a new problem. Please see below!

    I am currently trying to compile OpenCV for the Android NDK so that I can use it in my apps. For this I tried to follow this guide: http://www.stanford.edu/~zxwang/android_opencv.html But when compiling the downloaded stuff with ndk-build I get this error:

        $ /cygdrive/u/flori/workspace/android-ndk-r5b/ndk-build
        Compile++ thumb : opencv <= cvjni.cpp
        Compile++ thumb : cxcore <= cxalloc.cpp
        Compile++ thumb : cxcore <= cxarithm.cpp
        Compile++ thumb : cxcore <= cxarray.cpp
        Compile++ thumb : cxcore <= cxcmp.cpp
        Compile++ thumb : cxcore <= cxconvert.cpp
        Compile++ thumb : cxcore <= cxcopy.cpp
        Compile++ thumb : cxcore <= cxdatastructs.cpp
        Compile++ thumb : cxcore <= cxdrawing.cpp
        Compile++ thumb : cxcore <= cxdxt.cpp
        Compile++ thumb : cxcore <= cxerror.cpp
        Compile++ thumb : cxcore <= cximage.cpp
        Compile++ thumb : cxcore <= cxjacobieigens.cpp
        Compile++ thumb : cxcore <= cxlogic.cpp
        Compile++ thumb : cxcore <= cxlut.cpp
        Compile++ thumb : cxcore <= cxmathfuncs.cpp
        Compile++ thumb : cxcore <= cxmatmul.cpp
        Compile++ thumb : cxcore <= cxmatrix.cpp
        Compile++ thumb : cxcore <= cxmean.cpp
        Compile++ thumb : cxcore <= cxmeansdv.cpp
        Compile++ thumb : cxcore <= cxminmaxloc.cpp
        Compile++ thumb : cxcore <= cxnorm.cpp
        Compile++ thumb : cxcore <= cxouttext.cpp
        Compile++ thumb : cxcore <= cxpersistence.cpp
        Compile++ thumb : cxcore <= cxprecomp.cpp
        Compile++ thumb : cxcore <= cxrand.cpp
        Compile++ thumb : cxcore <= cxsumpixels.cpp
        Compile++ thumb : cxcore <= cxsvd.cpp
        Compile++ thumb : cxcore <= cxswitcher.cpp
        Compile++ thumb : cxcore <= cxtables.cpp
        Compile++ thumb : cxcore <= cxutils.cpp
        StaticLibrary  : libstdc++.a
        StaticLibrary  : libcxcore.a
        Compile++ thumb : cv <= cvaccum.cpp
        Compile++ thumb : cv <= cvadapthresh.cpp
        Compile++ thumb : cv <= cvapprox.cpp
        Compile++ thumb : cv <= cvcalccontrasthistogram.cpp
        Compile++ thumb : cv <= cvcalcimagehomography.cpp
        Compile++ thumb : cv <= cvcalibinit.cpp
        Compile++ thumb : cv <= cvcalibration.cpp
        Compile++ thumb : cv <= cvcamshift.cpp
        Compile++ thumb : cv <= cvcanny.cpp
        Compile++ thumb : cv <= cvcolor.cpp
        Compile++ thumb : cv <= cvcondens.cpp
        Compile++ thumb : cv <= cvcontours.cpp
        Compile++ thumb : cv <= cvcontourtree.cpp
        Compile++ thumb : cv <= cvconvhull.cpp
        Compile++ thumb : cv <= cvcorner.cpp
        Compile++ thumb : cv <= cvcornersubpix.cpp
        Compile++ thumb : cv <= cvderiv.cpp
        Compile++ thumb : cv <= cvdistransform.cpp
        Compile++ thumb : cv <= cvdominants.cpp
        Compile++ thumb : cv <= cvemd.cpp
        Compile++ thumb : cv <= cvfeatureselect.cpp
        Compile++ thumb : cv <= cvfilter.cpp
        Compile++ thumb : cv <= cvfloodfill.cpp
        Compile++ thumb : cv <= cvfundam.cpp
        Compile++ thumb : cv <= cvgeometry.cpp
        Compile++ thumb : cv <= cvhaar.cpp
        Compile++ thumb : cv <= cvhistogram.cpp
        Compile++ thumb : cv <= cvhough.cpp
        Compile++ thumb : cv <= cvimgwarp.cpp
        Compile++ thumb : cv <= cvinpaint.cpp
        Compile++ thumb : cv <= cvkalman.cpp
        Compile++ thumb : cv <= cvlinefit.cpp
        Compile++ thumb : cv <= cvlkpyramid.cpp
        Compile++ thumb : cv <= cvmatchcontours.cpp
        Compile++ thumb : cv <= cvmoments.cpp
        Compile++ thumb : cv <= cvmorph.cpp
        Compile++ thumb : cv <= cvmotempl.cpp
        Compile++ thumb : cv <= cvoptflowbm.cpp
        Compile++ thumb : cv <= cvoptflowhs.cpp
        Compile++ thumb : cv <= cvoptflowlk.cpp
        Compile++ thumb : cv <= cvpgh.cpp
        Compile++ thumb : cv <= cvposit.cpp
        Compile++ thumb : cv <= cvprecomp.cpp
        Compile++ thumb : cv <= cvpyramids.cpp
        Compile++ thumb : cv <= cvpyrsegmentation.cpp
        Compile++ thumb : cv <= cvrotcalipers.cpp
        Compile++ thumb : cv <= cvsamplers.cpp
        Compile++ thumb : cv <= cvsegmentation.cpp
        Compile++ thumb : cv <= cvshapedescr.cpp
        Compile++ thumb : cv <= cvsmooth.cpp
        Compile++ thumb : cv <= cvsnakes.cpp
        Compile++ thumb : cv <= cvstereobm.cpp
        Compile++ thumb : cv <= cvstereogc.cpp
        Compile++ thumb : cv <= cvsubdivision2d.cpp
        Compile++ thumb : cv <= cvsumpixels.cpp
        Compile++ thumb : cv <= cvsurf.cpp
        Compile++ thumb : cv <= cvswitcher.cpp
        Compile++ thumb : cv <= cvtables.cpp
        Compile++ thumb : cv <= cvtemplmatch.cpp
        Compile++ thumb : cv <= cvthresh.cpp
        Compile++ thumb : cv <= cvundistort.cpp
        Compile++ thumb : cv <= cvutils.cpp
        StaticLibrary  : libcv.a
        SharedLibrary  : libopencv.so
        U:/flori/workspace/android-ndk-r5b/toolchains/arm-linux-androideabi-4.4.3/prebuilt/windows/bin/../lib/gcc/arm-linux-androideabi/4.4.3/../../../../arm-linux-androideabi/bin/ld.exe: cannot find -lcxcore
        collect2: ld returned 1 exit status
        make: *** [/cygdrive/u/flori/workspace/android/testOpenCV/obj/local/armeabi/libopencv.so] Error 1

    I am trying to compile it on a Windows system and with the newest NDK version... Does anybody have an idea what this linking error means and what I can do to have it work again? Would be great if anybody could help.

    After getting that problem solved I found that there is another way of compiling OpenCV for Android, using the current version of OpenCV (instead of the 1.1 one from above) and the modified Android NDK from crystax, which supports STL and exceptions and therefore supports the newest OpenCV version. All information on that can be found here: http://opencv.willowgarage.com/wiki/Android

    There it says to download the current svn trunk and the crystax-r4 android-ndk, as well as swig, which I did. I entered the folder, created the build directory, ran cmake and then built the static libs, which seemed to work. At least it successfully ran the make command without errors. I now wanted to build the shared libraries, so I entered the android-jni folder and ran 'make' again, but got this error:

        % make -j4
        OPENCV_CONFIG = ../build/android-opencv.mk
        make clean-swig &&\
        mkdir -p jni/gen &&\
        mkdir -p src/com/opencv/jni &&\
        swig -java -c++ -package "com.opencv.jni" \
        -outdir src/com/opencv/jni \
        -o jni/gen/android_cv_wrap.cpp jni/android-cv.i
        OPENCV_CONFIG = ../build/android-opencv.mk
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        rm -f jni/gen/android_cv_wrap.cpp
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \
        PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V=
        /home/florian/android-ndk-r4-crystax/ndk-build OPENCV_CONFIG=../build/android-opencv.mk \
        PROJECT_PATH= ARM_TARGETS="armeabi armeabi-v7a" V=
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: Entering directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        /home/florian/android-opencv-willowgarage/android/android-jni/jni/Android.mk:10: ../build/android-opencv.mk: No such file or directory
        make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop.
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make: *** [libs/armeabi/libandroid-opencv.so] Error 2
        make: *** Waiting for unfinished jobs....
        make[1]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
        make[1]: *** No rule to make target `../build/android-opencv.mk'. Stop.
        make[1]: Leaving directory `/home/florian/android-opencv-willowgarage/android/android-jni'
        make: *** [libs/armeabi-v7a/libandroid-opencv.so] Error 2

    Does anybody have an idea what this means and what I can do to build the shared libraries?

    ... OK, after having a look at the error message it came to me that something seems to be missing in the build directory... but there wasn't even a build directory in the android folder, so I created one, ran 'cmake' in there and 'make' again, but now I get this error:

        Compile thumb  : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/sgetrf.c
        Compile thumb  : opencv_lapack <= /home/florian/android-opencv-willowgarage/3rdparty/lapack/scopy.c
        Compile++ thumb: opencv_core <= /home/florian/android-opencv-willowgarage/modules/core/src/matrix.cpp
        cc1plus: error: /home/florian/android-opencv-willowgarage/android/../modules/index.rst/include: Not a directory
        make[3]: *** [/home/florian/android-opencv-willowgarage/android/build/obj/local/armeabi/objs/opencv_core/src/matrix.o] Error 1
        make[3]: *** Waiting for unfinished jobs....
        make[2]: *** [android-opencv] Error 2
        make[1]: *** [CMakeFiles/ndk.dir/all] Error 2
        make: *** [all] Error 2

    Anybody know what this means?
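    For what it's worth on the first error: "cannot find -lcxcore" means the linker never located the libcxcore.a that was built moments earlier, not that anything failed to compile. A hedged sketch of the usual shape of a fix in ndk-build projects -- declaring the dependency as a module reference instead of a raw -l flag. The module names and variable placement are assumptions about that guide's Android.mk, not a verified patch:

        # hypothetical fragment of the libopencv module's Android.mk
        LOCAL_MODULE            := opencv
        LOCAL_STATIC_LIBRARIES  := cv cxcore   # static modules built earlier in the same run
        include $(BUILD_SHARED_LIBRARY)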


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport

    Our Import/Export service provides built-in support for BitLocker disk encryption, which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data

    You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs.

    It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address.

    We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight.
    HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight: the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal.

    The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery, or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
    - MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing; you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance.

    This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or, if you weren't on a virtual network, you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

    Web Sites: Web Sockets Support

    With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it.

    You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites.

    With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it.

    When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
    The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I was then able to debug the app remotely using Visual Studio.

    In that debug session we can inspect variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab.

    When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting; the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

    Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

    New support for using tag expressions to enable advanced customer segmentation

    With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences -- e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message -- e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content -- e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: to "all my group except me" - group:id && !user:id
    - Events: a touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 ...
    - Hours: send notifications at specific times, e.g. tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
    - Versions and platforms: send a reminder to people still using your first version for Android - version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

    With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, built successfully on an automated build server, and all tests pass on it, I can automatically have the app deployed on Windows Azure with zero manual intervention or work required.

    The steps below demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
    I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git, and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.

    I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository.

    The cool thing about this is that I don't have to buy or rent my own build server: Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy.

    To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected.

    When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie").

    When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier.

    Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass, the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site.

    This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this, and we have updated the Windows Azure Management Portal to make it really easy to configure.
    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it.

    Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything).

    Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services, including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version", which does not cost anything and can be used forever.

    Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu.

    This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it; make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal).

    Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure, and New Relic will now automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal, then selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account.

    When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing, without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Partitioning allows you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Install Quartz.Net as a Windows service and test the installation

    - by Tarun Arora
    In this blog post I’ll be covering: 01: Where to download Quartz.net from, 02: How to install Quartz.net as a Windows service, and 03: How to test the Quartz.net installation. If you are new to Quartz.net I would recommend reading the blog post on a brief introduction to Quartz.net. 01 – Where to download Quartz.net? http://sourceforge.net/projects/quartznet/files/quartznet/ Currently, Quartz.Net 2.0.1 is the recommended download version. 02 – How to install Quartz.net as a Windows service Go to the download location and unzip the Quartz.net package. Navigate to the folder Quartz.Net \ Server \ bin – this is where you will find the builds of the quartz.net packages for the different .NET versions. For example, in the screen shot above you can see the Quartz.net .NET 3.5 and .NET 4 packages. Open up the Quartz.net .NET 4.0 folder; this folder contains the files you need to install Quartz.net as a Windows service. Copy the contents of the folder Downloads\Quartz.NET-2.0.1\server\bin\4.0 to the folder %program files%\Quartz.net   5. Open up a new CMD as an administrator and run the below command to install Quartz.net as a Windows service /> Quartz.Server.exe install 6. How do I know that the Quartz.Net service has installed as a Windows service? Go to the run prompt and type ‘services.msc’ – you should now see all the Windows services installed on your machine. Navigate down to look for Quartz.Net. The service installs itself with the startup type ‘Automatic’ and logs on as ‘Local System’. You can easily change this to the account that you would prefer the service to run as. If you wanted to name the Quartz service something else then that’s also possible… Can I change the default display name of the quartz.net windows service? Yes, you can! Navigate to C:\Program Files (x86)\Quartz.Net\ and open up the config file ‘quartz.config’ - You can change the instance name - You can change the default thread count of 10 - The port that the service listens to (by default this is port 555) – a sketch of these settings appears at the end of this post. A blog post on more configuration details can be found here. 03 – Test the Quartz.Net windows service installation So, I have installed Quartz.Net as a Windows service; how do I test whether my installation has been successful? Open up cmd as an administrator and run the below command: C:\Program Files (x86)\Quartz.Net> Quartz.Server.exe -i Since by default the Quartz.net windows service writes INFO level diagnostics (this can be changed from Quartz.Server.exe.config), you should see the service information show up on the console. For instance, in the example above I can see that the service is running in NON CLUSTERED mode, it is currently not started, and it is in standby mode with 0 jobs executed so far… This was the second post in the series on enterprise scheduling using Quartz.net; in the next post I’ll be covering how to run your first scheduled task using the Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
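    For reference, below is a minimal sketch of what the relevant quartz.config entries look like. The property names follow the standard Quartz.Net configuration scheme, but treat the values shown here as illustrative assumptions and check the config file shipped with your download for the actual defaults:

    # quartz.config sketch – values are assumptions, not necessarily the shipped defaults
    # the instance name you can change
    quartz.scheduler.instanceName = ServerScheduler
    # the default thread count of 10 mentioned above
    quartz.threadPool.threadCount = 10
    # the remoting port the service listens on (555 by default)
    quartz.scheduler.exporter.port = 555

    After editing the file, restart the Quartz.Net windows service from services.msc for the changes to take effect.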

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any actions. This blog post discusses SHRINKFILE and TRUNCATE Log File. The script mentioned in the email received from a reader contains the following questionable code: “Hi Pinal, If you could remember, I and my manager met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was: Msg 155, Level 15, State 1, Line 1 ‘TRUNCATE_ONLY’ is not a recognized BACKUP option. The code was: DBCC SHRINKFILE(TestDBLog, 1) BACKUP LOG TestDB WITH TRUNCATE_ONLY DBCC SHRINKFILE(TestDBLog, 1) GO I have modified that code to the subsequent code and it works fine. But, are there other suggestions you have at the moment? USE [master] GO ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT DBCC SHRINKFILE(TestDbLog, 1) ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT GO Configuration of our server and system is as follows: [Removed not relevant data]“ An email like this suddenly popping up early in the morning is alarming. Because I was dead busy, I had only one minute to reply, so I quickly wrote down the following note (as I said, it was a single-minute email, so it is not completely accurate). Here is that quick email, shared with all of you. “Hi Mr. DBA [removed the name] Thanks for your email. I suggest you stop this practice. There are many issues included here, but I would list two major issues: 1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups. 2) Shrinking a file or database adds fragmentation. There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating them and losing them frequently (a sketch of this pattern follows at the end of this post). BACKUP LOG [TestDb] TO  DISK = N'C:\Backup\TestDb.bak' GO Remove the code that SHRINKs the file. If you are taking proper log backups, your log file usually (again usually, special cases are excluded) does not grow very big. There are so many things to add here, but you can call me on my [phone number]. Before you call me, I suggest for accuracy you read Paul Randal‘s two posts here and here and Brent Ozar‘s post here. Kind Regards, Pinal Dave” I guess this post is very clear to you. Please leave your comments here. As mentioned, this is a very huge subject; I have just touched the tip of the iceberg and have tried to point to authentic knowledge. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
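    To make the advice above concrete, here is a minimal sketch of the recommended routine: first check how full the log actually is, then back the log up on a schedule rather than truncating it. The database name and backup path are illustrative assumptions carried over from the post:

    -- A minimal sketch; the database name and path are illustrative assumptions
    -- 1) Check how much of each transaction log is actually in use
    DBCC SQLPERF(LOGSPACE);
    GO
    -- 2) Back up the log on a schedule instead of truncating it
    BACKUP LOG [TestDb]
    TO DISK = N'C:\Backup\TestDb_log.trn';
    GO

    Run on a regular schedule, this keeps the log chain intact for point-in-time restores while preventing uncontrolled log growth.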

    Read the article

  • SQLAuthority News – SQL Server Technology Evangelists and Evangelism

    - by pinaldave
    This is the exact conversation that I had with three people during the recent SQL Server Public Training. Person 1: “Are you an SQL Server Evangelist?” Pinal : “No, but Vinod Kumar is.” Person 1: “Who are you?” Person 2: “He is Pinal, haha!” Person 1: “I know that, but don’t you evangelize SQL Server Technology?” Pinal : “Hmm… I do that…” Person 1: “In that case, why don’t you call yourself an Evangelist?” Pinal : “…! …” Person 2: “Good Question! Who are you Pinal?” Pinal : “I think you are asking my title, is that correct?” Person 1: “Maybe.” Pinal : “I am a Mentor, and I work for Solid Quality Mentors.” Person 2: “I have seen you listing yourself as the Founder of SQLAuthority.com… so…” Pinal : “Yeah that’s true.” Person 3: “Let me summarize what these people are asking. What they are asking is that you can have multiple titles, so is being an evangelist one of your titles or not?” Pinal : “Well, I am an SQL Server MVP and lots of people say that we are also evangelists of technology. In fact, we are all evangelists of technology, aren’t we?” Person 1: “So let me come back to my original topic: If you are an SQL Server Evangelist, then what is this evangelism?” Person 2: “And who is Vinod Kumar – I have heard about him a lot.” Pinal : “Oh okay. Now I got it. Let me explain …” The answer was quite long, but since this conversation I have been thinking about the words “evangelist” and “evangelism.” I think being an evangelist is one of the most respected jobs in the world, and to do this job one must bear lots of responsibilities. Two questions were asked of me, so let me answer them one by one. Who is Vinod Kumar? Vinod Kumar is a Technology Evangelist for Microsoft and one of the most respected persons in the SQL Server Community in India. Let me copy-paste my note from the previous TechEd India 2010 article. “I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller so there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod was the best speaker in the event. He did not disappoint the audience a single time; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL.” Pinal Dave and Vinod Kumar What is Technology Evangelism? Here I am listing three posts written by Vinod Kumar, wherein he talks about Technology Evangelism and Technology Evangelists in an in-depth manner. They are highly-regarded articles in the Community. Evangelism beyond boundaries with an Evangelists !!! Technology Evangelism Demystified New face of Online Technology Evangelism I strongly recommend reading them all. These are wonderful blog posts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET Wait Stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or not. Before we continue to the resolution, let us understand what CXPACKET Wait Stats are. The official definition suggests that a CXPACKET Wait Stat occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem. (from BOL) In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET Wait Stat. Threads which finish first have to wait for the slower thread to finish. The wait recorded by a specific completed thread is called the CXPACKET Wait Stat. Note that the CXPACKET wait is registered by the completed threads and not the ones which are unfinished. “Note that not all the CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is also unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.” Now let us see what the best practices to reduce the CXPACKET Wait Stats are (a quick script to measure how significant CXPACKET actually is on your server is sketched at the end of this post). The suggestions you will find if you search online will, in most cases, tell you that you should set ‘maximum degree of parallelism’ to 1. I do agree with these suggestions, too; however, I think this is not the final resolution. As soon as you set your entire workload to run on a single CPU, you will get very bad performance from the queries which actually perform well when using parallelism. The best suggestion for this is that you set ‘the maximum degree of parallelism’ to a lower number or 1 (be very careful with this – it can create more problems) but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to let a query use parallelism. Here are two quick scripts which help to resolve these issues: Change MAXDOP at Server Level EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Run Query with all the CPUs (using parallelism) USE AdventureWorks GO SELECT * FROM Sales.SalesOrderDetail ORDER BY ProductID OPTION (MAXDOP 0) GO Below is the blog post which will help you to find all the parallel queries on your server. SQL SERVER – Find Queries using Parallelism from Cached Plan Please note that running queries on a single CPU may worsen your performance and it is not recommended at all. In fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
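    As a first step in your own environment, it helps to measure how large CXPACKET actually is relative to all other waits before changing any settings. Below is a minimal sketch of such a check; the DMV and its columns are standard, but the percentage calculation is just one illustrative way to slice the data:

    -- A minimal sketch: how significant is CXPACKET relative to total waits?
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS DECIMAL(5, 2)) AS pct_of_total_waits
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0
    ORDER BY wait_time_ms DESC;

    If CXPACKET sits near the top with a large share of total wait time, that is the cue to investigate the offending parallel queries rather than to immediately lower MAXDOP server-wide.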

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM.  Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server Jobs.  All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment. Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job. The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment.  This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx" After downloading and installing the wizard you will need to set up the Hosts, Instances and Adapter handlers.  This is done by running a script file using “cscript” as detailed below.  To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters: NTGroupName - The name of the Windows NT group. UserName – The name of the user account running the service instances. Password – The password of the user account running the service instances. Receive Host – The name of the server where you want to run the receive host instance.  Send Host - The name of the server where you want to run the send host instance. Processing Host - The name of the server where you want to run the process host instance. 
By default the script is set up for 64 bit hosts, so if you are running in a 32 bit environment make sure that you change the following line in the script before continuing: from:   objHS.IsHost32BitOnly = False to:    objHS.IsHost32BitOnly = True If you have a single box installation, your script command might look like this: cscript InstallHosts.vbs "BizTalk Application Users" “\MyUser” “MyPassword” “BtsServer1” “BtsServer1” “BtsServer1” If you have a multi server installation, your script command might look like this: cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" “MyDomain\MyUser” “MyPassword” “BtsServer1” “BtsServer2” “BtsServer2” Running this script will create: three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost), three host instances, and one send and one receive adapter handler for the WCF NetTcp adapter. You will then need to import the BizTalk MSI via the BizTalk Administration Console.  Open the BizTalk Administration Console, point to the “Applications” node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a “BizTalk Benchmark Wizard” application along with all the ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • 11 Types of Developers

    - by Lee Brandt
    Jack Dawson Jack Dawson is the homeless drifter in Titanic. At one point in the movie he says, “I figure life’s a gift, and I don’t intend on wasting it.” He is happy to wander wherever life takes him. He works himself from place to place, making just enough money to make it to his next adventure. The “Jack Dawson” developer clings on to any new technology as the ‘next big thing’, and will find ways to shoe-horn it into places where it is not a fit. He is very appealing to the other developers because they want to try the newest techniques and tools too. He will only stay until the new technology either bores him or becomes problematic. Jack will also be hard to find once the technology has been implemented, because he will be on to the next shiny thing. However, having a Jack Dawson on your team can be beneficial. Jack can be a great ally when attempting to convince a stodgy, corporate entity to upgrade. Jack usually has an encyclopedic recall of all the new features of the technology upgrade and is more than happy to interject them into any conversation. Tom Smykowski Tom is the neurotic employee in Office Space, and is deathly afraid of being fired. He will do only what is necessary to keep the status quo. He believes as long as nothing changes, his job is safe. He will scoff at anything new and be the naysayer during any change initiative. Tom can be useful in offsetting Jack Dawson. Jack will constantly be pushing for change and Tom will constantly be fighting it. When you see that Jack is getting kind of bored with a new technology and Tom has finally stopped wetting himself at the mere mention of it, then it is probably the sweet spot for beginning to implement that new technology (providing it is the right tool for the job). Ray Kinsella Ray is the guy who built the Field of Dreams. He took a risk. Sometimes he screwed it up, but he knew he didn’t want to end up regretting not attempting it. He constantly doubted himself, but he knew he had to keep going. Granted, he was doing what the voices in his head were telling him to do, but my point is he was driven to do something that most people considered crazy. Even when his friends, his wife, and even he himself said he was crazy, somewhere inside he knew it was the right thing to do. These are the innovators. These are the Bill Gates and Steve Jobs of the world. They take risks, they fail, they learn, and they get better. Obviously, this kind of person thrives in start-ups and smaller companies, but that is due to their natural aversion to bureaucracy. They want to see their ideas put into motion quickly, and withdrawn quickly if they don’t work. Short feedback cycles are essential to Ray. He wants to know if his idea is working or not. He wants to modify or reverse his idea if it is not working or makes things worse. These are the agilistas. May I always be one.

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary. Step One: Open or Create a Model In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model. Step Two: Import your Data Dictionary This is a fancy way of saying, ‘suck objects out of the database into my model.’ This will open a wizard to connect, select your schema(s), objects, etc. Once they’re in your model, you’re ready to cook with gas. I’m using HR (Human Resources) for this example. You should end up with something that looks like this. Our favorite HR model Now we’re ready to generate the views! Step Three: Auto-generate the Views Go to Tools – Data Modeler – Table to View Wizard. I don’t want all my tables included, and I want to change the naming standard. Decide if you want to change the default generated view names. By default the views will be created as ‘V_TABLE_NAME.’ If you don’t like the ‘V_’ you can enter your own prefix. You can also reference the object and model name with variables as shown in the screenshot above. I’m going to go with something a little more personal. The views are the little green boxes in the diagram. Can’t find your views? They should be grouped together in your diagram. Don’t forget to use the Navigator to easily find and navigate to those model diagram objects! Step Four: Generate the DDL OK, let’s use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter. You might have existing views in your model that you don’t want to include, right? Once you click ‘OK’ the DDL will be generated. 
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
--   at:   2013-11-04 10:26:39 EST
--   site: Oracle Database 11g
--   type: Oracle Database 11g

CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID, COUNTRY_NAME, REGION_ID ) AS
SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID
FROM HR.COUNTRIES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID
FROM HR.EMPLOYEES;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY ) AS
SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY
FROM HR.JOBS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID ) AS
SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID
FROM HR.JOB_HISTORY;

CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID ) AS
SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID
FROM HR.LOCATIONS;

CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID, REGION_NAME ) AS
SELECT REGION_ID, REGION_NAME
FROM HR.REGIONS;

-- Oracle SQL Developer Data Modeler Summary Report:
--
-- CREATE TABLE                 0
-- CREATE INDEX                 0
-- ALTER TABLE                  0
-- CREATE VIEW                  6
-- CREATE PACKAGE               0
-- CREATE PACKAGE BODY          0
-- CREATE PROCEDURE             0
-- CREATE FUNCTION              0
-- CREATE TRIGGER               0
-- ALTER TRIGGER                0
-- CREATE COLLECTION TYPE       0
-- CREATE STRUCTURED TYPE       0
-- CREATE STRUCTURED TYPE BODY  0
-- CREATE CLUSTER               0
-- CREATE CONTEXT               0
-- CREATE DATABASE              0
-- CREATE DIMENSION             0
-- CREATE DIRECTORY             0
-- CREATE DISK GROUP            0
-- CREATE ROLE                  0
-- CREATE ROLLBACK SEGMENT      0
-- CREATE SEQUENCE              0
-- CREATE MATERIALIZED VIEW     0
-- CREATE SYNONYM               0
-- CREATE TABLESPACE            0
-- CREATE USER                  0
--
-- DROP TABLESPACE              0
-- DROP DATABASE                0
--
-- REDACTION POLICY             0
--
-- ERRORS                       0
-- WARNINGS                     0

You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228 This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums Setup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview:
    - All views converted to use Razor
    - E-mail subscription/notification of new posts
    - New post indicators/mark read buttons
    - Permalinks to posts
    - Jump to newest post (from new post indicators)
    - Recent topics
    - Favorite topics
    - Moderator functions for topics (pin/close/delete, plus move and rename)
    - Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well
    - User posts (topics the user posted in)
    - Forgot password
    - Vanity items (signatures and avatars)
    - Hide vanity items per user preference
    - Some minor data caching where appropriate
    - A little bit of UI refinement
    - Lots-o-bug fixes
    - Lots-o-unit tests
    What's next? The plan between now and the next beta is as follows:
    - Continue working through features/tasks, and fix bugs as they're reported
    - Integrate the forum into a real, production site
    - Refine the UI
    - Refactor as much as possible... the code organization is not entirely logical in some places
    After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect. Other notes: Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum. The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;) How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals, because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing. I've been having some chats with people lately about quoting posts, and honestly there has to be something better and more straightforward. 
That continues to be a holy grail of mine, and some day, I hope to find it. Enjoy... it's starting to feel more real every day!

    Read the article

  • Making a Job Change That's Easy – Why Not Try a Career Change

    - by david.talamelli
    A few nights ago I received a comment on one of our blog posts that reminded me of a statistic that I heard a while back. The statistic reflected the change in our views towards work and showed how, while people in past generations would stay in one role for their entire working career, now with so much choice people not only change jobs often but also change careers 4-5 times in their working life. To differentiate between a job change and a career change: when I say job change this could be an IT Sales person moving from one IT Sales role to another IT Sales role. A career change, for example, would be that same IT Sales person moving from IT Sales to something outside the scope of their industry – maybe to something like an Engineer or Scuba Dive Instructor. The reasons for career changes can be as varied as the people who make them. Someone's motivation could be to pursue a passion, maybe there is a change in their personal circumstances forcing the change, or it could be any other number of reasons. I think it takes courage to make a career change – it can be easy to stay in your comfort zone and do what you know, but to really push yourself sometimes you need to try something new; it is a matter of making that career transition as smooth as possible for yourself. The comment that was posted is here below (thanks Dean for the kind words, they are appreciated). Hi David, I just wanted to let you know that I work for a company called Milestone Search in Melbourne, Victoria Australia. (www.mstone.com.au) We subscribe to your feed on a daily basis and find your blogs both interesting and insightful. Not to mention extremely entertaining. I wonder if you have missed out on getting into journalism as this seems to be something you'd be great at? :) Anyways, back to my point about changing careers. This could be anything from going from I.T. to Journalism, Engineering to Teaching, or any combination of careers you can think of. I don't think there has ever been a time when we have had so many opportunities to do so many different things in our working life. While this idea sounds great in theory, putting it into practice would be much harder to do, I think. First, in an increasingly competitive job market, employers tend to look for specialists in their field. You may want to make a change, but your options may be limited by the number of employers willing to take a chance on someone new to the industry, who will likely require a significant investment of time to be brought up to speed. Also, using myself as an example, if I were given the opportunity to move into a Journalism/Communication/Marketing career from my career as an IT Recruiter – realistically I would have to take a significant pay cut to make this change, as my current salary reflects the expertise I have in my current career. I would not immediately be up to speed moving into a new career and would not be able to justify a similar salary. Yes, there are transferable skills in any career change, but even with transferable skills you must realise that you will also have a large amount of learning to do, which takes time. These are two initial hurdles that I immediately think of; there may be more, but nothing is insurmountable. Once you work out what you want to do with your working career, whatever that may be, you then just need to work out the steps to get to your end goal. This is where utilising the power of your networks and using Social Media can come in handy. 
If you are interested in working somewhere, why not proactively take the opportunity to research the industry or company – find out who it is you need to speak to and get in touch with them. We spend so much time working that we should enjoy the work we do and not be afraid to try new things. Your dream job is not likely to fall into your lap or be handed to you on a silver platter, so if there is something you do want to do, work out a plan to make it happen and chase after it. This article was originally posted on David Talamelli's Blog - David's Journal on Tap

    Read the article

  • So No TECH job so far.

    - by Ratman21
    Oh, I found some temp work for the US Census and I have managed to keep the house (so far), but it looks like I/we are going to have to do a short sale, and the temp job will be ending soon. On top of that, it looks like the unemployment fund for me is drying up. I will have about one month left after the Census job is done. I am now down to applying for work at KFC. This is the type of work I started with, before I was a tech geek, and I really didn't think I would be doing this kind of work in my later years, but I have a wife and kid, so I've got to suck it up and do it. Oh, and here is my new resume… go ahead, I know you want to tear it up. I really don't care any more.

    Scott L. Newman, 45219 Dutton Way, Callahan, FL 32011. H: (904)879-4880 C: (352)356-0945 E: [email protected] Web: http://beingscottnewman.webs.com/

    OBJECTIVE: To obtain a Network or Technical support position.

    KEYWORD SUMMARY: CompTIA A+, Network+, and Security+ Certified, Network Operation, Technical Support, Client/Vendor Relations, Networking/Administration, Cisco Routers/Switches, Helpdesk, Microsoft Office Suite, Website Design/Dev./Management, Frame Relay, ISDN, Windows NT/98/XP, Visio, Inventory Management, CICS, Programming, COBOL IV, Assembler, RPG

    QUALIFICATIONS SUMMARY: Twenty years' experience in computer operations, technical support, and technical writing. Also have two and a half years' experience in internet/intranet operations.

    PROFESSIONAL EXPERIENCE
    October 2009 – Present*: Volunteer Web site and PC technician – Part time. True Faith Christian Fellowship Church – Callahan, FL. Project: Create and maintain web site for Church to give it worldwide exposure.
    Aug 2008 – September 2009*: Volunteer Church sound and video technician – Part time. Thomas Creek Baptist Church – Callahan, FL
    *Note: These jobs were for learning and/or keeping skills updated while looking for a tech job and training for new skills.
    February 2005 to October 2008: Client Server Dev/Analyst I, Fidelity National Information Services, Jacksonville, FL (FNIS acquired Certegy in 2005, and out of 20 personnel I was one of three kept on.)
    August 2003 to February 2005: Senior NetOps Operator, Certegy, St. Pete, FL (In August 2003, Certegy terminated its contract with EDS, and out of 40 personnel I was one of six kept on.)
    Projects: Creation and update of the listing and placement of all raised-floor equipment at the St. Pete site. The listing was made up of a floor plan of the raised floor and equipment-rack diagrams showing the placement of all devices, drawn using Visio. This was cross-referenced with an inventory Excel document showing which dept was responsible for each device. Sole creator of the Network Operation and Server Operation procedures guide (NetOps Guide).
    Expertise: Resolving circuit and/or router issues, or assisting the circuit carrier in resolving them, from the company Network Operation Center (NOC), as well as resolving application problems or assisting application support in their resolution.
    July 1999 to August 2003: Senior NetOps Operator, EDS (Certegy Account), St. Pete, FL. Same expertise and ongoing projects as listed above for FNIS/Certegy. (Equifax outsourced the NetOps dept. to EDS in 1999.)
    January 1991 to July 1999: NetOps/Tandem Operator, Equifax, St. Pete & Tampa, FL. Same as all of the above for FNIS/Certegy/EDS except for circuit and router issues.

    EDUCATION
    - New Horizons Computer Learning Center, Jacksonville, Florida – CompTIA A+, Security+, and Network+ Certified. Currently working on CCNA Certification, 07/30/10
    - Mott Community College, Flint, Michigan – Associates Degree – Data Processing and General Education
    - Currently studying Japanese

    Read the article

  • SQL SERVER – Free Print Book on SQL Server Joes 2 Pros Kit

    - by pinaldave
    Rick Morelan and I were discussing earlier this month what we can give back to the community. We believe our books are very successful and very well received by the community. The five books are a journey from novice to expert. The books have changed many lives and helped many get jobs as well as pass the SQL certifications. Rick is from Seattle, USA, and I am from Bangalore, India. There is a 12-hour difference between us. We try to do a weekly meeting to catch up on various personal and SQL related topics. Here is one of our recent conversations. Rick and Pinal Pinal: Good Morning Rick! Rick: Good Morning…err… Good Evening to you – Pinal! Pinal: Hey Rick, did you read the recent email which I sent you – one of our readers is thanking us for writing the Joes 2 Pros series. He wants to dedicate his success to us. Can you believe it? Rick: Yeah, he is very kind, but did you tell him that it is all because of his hard work on learning the subject, and that we have very little contribution to his success? Pinal: Absolutely, I told him the same – I said we just wrote the book but it is he who learned from it and proved himself in his job. It is all him! We were just igniters. Rick: Good response. Pinal: Hey Rick! Are we doing enough for the community? What more can we do? Rick: Hmmm… Let us do something more. Pinal: Remember when we discussed the idea that if anyone buys our Joes 2 Pros Combo Kit in the next 2 weeks – we will send them SQL Wait Stats for free? What do you say? Rick: I agree! Great Idea! Let us do it. Free Giveaway Well, Rick and I liked the idea of doing more. We have decided to give away free SQL Server Wait Stats books to everybody who purchases the Joes 2 Pros Combo Kit between today (Oct 15, 2012) and Oct 26, 2012. This is not a contest or a lucky-winner opportunity. Everybody who participates will qualify for it. Combo Availability USA – Amazon India - Flipkart | Indiaplaza Note1: The USA kit contains 5 FREE DVDs. The India kit does not contain the 5 DVDs due to legal issues. Note2: The Indian kit is priced at a special Indian economic price. Qualify for the Free Giveaway You must have purchased our Joes 2 Pros Combo Kit of 5 books between Oct 15, 2012 and Oct 26, 2012. Purchases made before Oct 15, 2012 or after Oct 26, 2012 will not qualify for this giveaway. Send your original receipt (email, order details) to the following addresses: “[email protected];[email protected]” with the subject line “Joes 2 Pros Kit Promotion Free Offer”. Do not change the subject line or your email may be missed.  Clearly mention your shipping address with phone number and pin/zip code. Send your receipt before Oct 30, 2012. We will not entertain any conversation after the Oct 30, 2012 cut-off date. The free books will be sent to USA and India addresses only. Availability USA - Amazon | India - Flipkart | Indiaplaza Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLServer, T SQL, Technology

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc., for the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let’s use this function with our AdventureWorks2012 database to better illustrate the information it provides. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL) As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions, so I will not go into detail about these at this time. The next column in the result set is the index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is the index_level, which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, the avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORG’s or full REBUILD’s of a given index (a sketch of this pattern appears just below). The fragment_count represents the number of fragments in a leaf level, while the avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a ‘mode’ in my query, and as a result it used the ‘LIMITED’ mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the ‘DETAILED’ mode and you will see we now have results for these rows. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')   From the remaining columns, you see we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit within the original row on the page and thus have to be moved. A forwarded record is left in the original location with a pointer to the new location. The last column in the result set is the compressed_page_count column, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource intensive function, so be careful with how you use it. 
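    To illustrate the REORG-vs-REBUILD decision mentioned above, here is a minimal sketch. The 5% and 30% thresholds are the commonly cited rules of thumb rather than values from this post, and the output is only a suggested action list, not an automated maintenance job:

    -- A minimal sketch; thresholds are illustrative rules of thumb
    USE AdventureWorks2012;
    GO
    SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
           i.[name] AS index_name,
           ips.avg_fragmentation_in_percent,
           CASE
                WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
                ELSE 'NONE'
           END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.[object_id] = ips.[object_id] AND i.index_id = ips.index_id
    WHERE ips.index_id > 0  -- exclude heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    Note that the sketch deliberately uses the LIMITED mode, since fragmentation percentages are available there without the heavier cost of a DETAILED scan.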
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Enable Full Screen Mode in Media Center Without Trapping the Mouse

    - by DigitalGeekery
    If you have a dual monitor setup and use Windows Media Center, you’re probably aware that when WMC is in full screen mode, it traps the mouse so you can’t work on a second monitor. Here we look at how to solve the annoyance. The Maxifier is an application that allows you to open Media Center in full screen mode without restricting the mouse. It relieves the annoyance of WMC capturing your mouse on a dual monitor setup. Note: If you don’t have two monitors attached, most of The Maxifier’s functions won’t work. Installation and Use Download, extract, and install The Maxifier. (See the download link below.) The Maxifier runs minimized in the system tray and you access the options by right-clicking on the icon. If Media Center is not already open, you can choose Start Media Center to start WMC on the main start screen. Or, choose one of the other selections to open another area of Media Center. By default, Maxifier opens Media Center in full screen mode on the secondary monitor. When Media Center is open in full screen mode, you’ll notice you can now freely move your mouse around your multi-monitor setup. When Media Center is open, you’ll see five additional options. The Fit Screen options simply fit Media Center to the full screen, but still show the window borders. Full screen options put WMC in full screen mode. The Maxifier Options allow you to choose from the various start up options. Selecting Watch for Media Center starting will prompt Maxifier to open WMC to the main start page in full screen mode on the secondary monitor automatically, even if you open Media Center without using The Maxifier. (You may need to restart for this to take effect.) If you have more than 2 monitors, you can define on which monitor to open Media Center, and which monitor you consider to be the main screen. You can also define a number of Hotkeys in The Maxifier settings. First, select the Enable Hotkeys checkbox. To create a Hotkey, click in the text field and then press the keys to use as the Hotkey. To remove a Hotkey, click in the field and press the Delete key. Conclusion The Maxifier is a simple program that enables Media Center users to take full advantage of a multi-monitor workspace. It works with both Vista and Windows 7. Version 1.4 is a stable application for Vista, and Version 1.5b is a beta application for Windows 7. Looking for more Media Center tips and tweaks? Check out some startup customizations for Windows 7 Media Center, how to automatically mount and view ISO’s in WMC, and how to add background images and themes to Windows 7 Media Center. Link: Download the Maxifier

    Read the article

  • Failure Sucks, But Does It Have To?

    - by steve.diamond
    Hey Folks--It's "elephant in the room" time. Imagine a representative from a CRM VENDOR discussing CRM FAILURES. Well. I recently saw this blog post from Michael Krigsman on "six ways CRM projects go wrong." Now, I know this may come off as defensive, but my comments apply to ALL CRM vendors, not just Oracle. As I perused the list, I couldn't find any failures related to technology. They all seemed related to people or process. Now, this isn't about finger pointing, or impugning customers. I love customers! And when they fail, WE fail. Although I sit in the cheap seats, i.e., I haven't funded any multi-million dollar CRM initiatives lately, I kept wondering how to convert the perception of failure from something that ends and is never to be mentioned again (see Michael's reason #4) into something that one learns from and builds upon. So to continue my tradition of speaking in platitudes, let me propose the following three tenets: 1) Try and get ahead of your failures while they're very, very small. 2) Immediately assess what you can learn from those failures. 3) With more than 15 years of CRM deployments behind us, seek out those vendors that have a track record both in learning from "misses" and in supporting MANY THOUSANDS of CRM successes at companies of all types and sizes. Now let me digress briefly with an unpleasant (for me, anyway) analogy. I really don't like flying. Call it 'fear of dying' or 'fear of no control.' Whatever! I've spoken with quite a few commercial pilots over the years, and they reassure me that there are multiple failures on most every flight. We as passengers just don't know about them. Most of them are too miniscule to make a difference, and most of them are "caught" before they become LARGER failures. It's typically the mid-sized to colossal failures we hear about, and a significant percentage of those are due to human error. What's the point? I'd propose that organizations consider the topic of FAILURE in five grades. On one end, FAILURE Grade 1 is a minor/miniscule failure. On the other end, FAILURE Grade 5 is a colossal failure. A Grade 1 CRM FAILURE could be that a particular interim milestone was missed. Why? What can we learn from that? How can we prevent that from happening as we proceed through the project? Individual organizations will need to define their own Grade 2 and Grade 3 failures. The opportunity is to keep those Grade 3 failures from escalating any further. Because honestly, a GRADE 5 failure may not be recoverable. It could result in a project being pulled, countless hours and dollars lost, and jobs lost. We don't want to go there. In closing, I want to thank Michael for opening my eyes up to the world of "color," versus thinking of failure as both "black and white" and a dead-end road that organizations can't learn from and avoid discussing like the plague.

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >