Search Results

Search found 12039 results on 482 pages for 'job searching'.

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.
    One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices, each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled?
    In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.
    The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7.
    So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board: "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement.
    The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run.
    Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it.
    Post by Brian Harris
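    For the curious, the "restore and check" half of the job that the post is talking about boils down to something like the following plain T-SQL - a minimal sketch with hypothetical file paths and logical file names, not SQL Backup's own syntax (SQL Backup 7 automates and schedules these steps rather than running this exact script):
    -- quick check that the backup media itself is readable
    RESTORE VERIFYONLY FROM DISK = N'D:\Backups\FavouriteThings.bak';
    -- restore to a scratch database, run a full integrity check, then tidy up
    RESTORE DATABASE FavouriteThings_Verify
      FROM DISK = N'D:\Backups\FavouriteThings.bak'
      WITH MOVE N'FavouriteThings' TO N'D:\Verify\FavouriteThings_Verify.mdf',
           MOVE N'FavouriteThings_log' TO N'D:\Verify\FavouriteThings_Verify.ldf',
           REPLACE;
    DBCC CHECKDB (FavouriteThings_Verify) WITH NO_INFOMSGS;
    DROP DATABASE FavouriteThings_Verify;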

    Read the article

  • AWStats is processing log files but does not display them

    - by Wouter
    I've setup AWStats on my VPS to get some more insight into the traffic coming to my site. As instructed, I ran a manual build/update, which ran fine:
    sudo -u www-data ./awstats.pl -config=xxxx.com
    Create/Update database for config "/etc/awstats/awstats.xxxx.com.conf" by AWStats version 6.9 (build 1.925)
    From data in log file "/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"...
    Phase 1 : First bypass old records, searching new record...
    Searching new records from beginning of log file...
    Phase 2 : Now process new records (Flush history on disk after 20000 hosts)...
    Warning: awstats has detected that some hosts names were already resolved in your logfile /usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |.
    If DNS lookup was already made by the logger (web server), you should change your setup DNSLookup=1 into DNSLookup=0 to increase awstats speed.
    Jumped lines in file: 0
    Parsed lines in file: 814
    Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 814 new qualified records.
    It also produced the file in the DatDir, /var/lib/awstats/awstats052010.xxxx.com.txt, which contains what I would expect. BUT when I visit xxxx.com/awstats/awstats.pl it tells me "Last Update: Never updated (See 'Build/Update' on awstats_setup.html page)" and the rest of the page is blank. I'm pretty sure I set it up correctly, but now I cannot figure out why this is happening. Hopefully someone smarter than me can help me. Thank you in advance.
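    A hedged aside rather than a definitive answer: "Never updated" in the browser often means the CGI is reading a different config than the one the command-line update wrote to. The directives below are standard AWStats settings; the values shown are assumptions for this particular setup and would need checking against the real files:
    # /etc/awstats/awstats.xxxx.com.conf - must be the config the CGI resolves to
    LogFile="/usr/share/doc/awstats/examples/logresolvemerge.pl /var/www/xxxx.com/logs/*-access.log |"
    SiteDomain="xxxx.com"
    DirData="/var/lib/awstats"
    DNSLookup=0    # per the warning above: the web server already resolved host names
    # and the CGI should be called with the matching config name, e.g.:
    # http://xxxx.com/awstats/awstats.pl?config=xxxx.com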

    Read the article

  • Python easy_install confused on Mac OS X

    - by slf
    environment info:
    $ echo $PATH
    /opt/local/bin:/opt/local/sbin:/sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin:/opt/local/bin:/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:~/.utility_scripts
    $ which easy_install
    /usr/bin/easy_install
    specifically, let's try the simplejson module (I know it's the same thing as import json in 2.6, but that isn't the point)
    $ sudo easy_install simplejson
    Searching for simplejson
    Reading http://pypi.python.org/simple/simplejson/
    Reading http://undefined.org/python/#simplejson
    Best match: simplejson 2.1.0
    Downloading http://pypi.python.org/packages/source/s/simplejson/simplejson-2.1.0.tar.gz#md5=3ea565fd1216462162c6929b264cf365
    Processing simplejson-2.1.0.tar.gz
    Running simplejson-2.1.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-Ojv_yS/simplejson-2.1.0/egg-dist-tmp-AypFWa
    The required version of setuptools (>=0.6c11) is not available, and can't be installed while this script is running. Please install a more recent version first, using 'easy_install -U setuptools'. (Currently using setuptools 0.6c9 (/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python))
    error: Setup script exited with 2
    ok, so I'll update setuptools...
    $ sudo easy_install -U setuptools
    Searching for setuptools
    Reading http://pypi.python.org/simple/setuptools/
    Best match: setuptools 0.6c11
    Processing setuptools-0.6c11-py2.6.egg
    setuptools 0.6c11 is already the active version in easy-install.pth
    Installing easy_install script to /usr/local/bin
    Installing easy_install-2.6 script to /usr/local/bin
    Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg
    Processing dependencies for setuptools
    Finished processing dependencies for setuptools
    I'm not going to speculate, but this could have been caused by any number of environment changes like the Leopard - Snow Leopard upgrade, MacPorts or Fink updates, or multiple Google App Engine updates.
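    A hedged observation on the transcript above: the update installed a fresh easy_install into /usr/local/bin, but /usr/bin comes before /usr/local/bin in this PATH, so `which easy_install` keeps resolving to the stock Apple copy that is pinned to setuptools 0.6c9. A sketch of how one might confirm and work around that (paths taken from the output above; the --version check is an assumption worth verifying):
    $ ls -l /usr/bin/easy_install /usr/local/bin/easy_install   # both should exist now
    $ /usr/local/bin/easy_install --version                     # expected: setuptools 0.6c11
    $ sudo /usr/local/bin/easy_install simplejson               # bypass the stale copy explicitly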

    Read the article

  • Alternatives to Splunk?

    - by MichaelGG
    I'm pretty impressed with Splunk, especially version 4. Pretty graphs, alerting (Enterprise only), and fast, accurate searching. It's a great product. However, the cost is just way too high to consider for full production use at our company. All we really need is to be able to index different logs in a central place, and have reasonable searching on that. Having alerts based on a saved search is also really nice. We don't really go beyond that. In fact, our biggest usage has been in deploying new applications. Everything gets logged via log4net to either the Event log on Windows or a text file on Linux. Splunk makes it pretty easy to quickly search across those to make sure all the parts of the app are working ok -- that's saved us tons of time versus hunting down individual logging sources. What alternatives exist in this market? I have a sinking feeling Splunk's pricing is so high because they have the best product by far, and they know it. We want the server to run on Windows. I'd be open to a split model, using one product for general logs (collected via syslog/Snare), and a dedicated product for our custom apps (like Log4Net Dashboard). Would using a simple syslog server such as Kiwi, sent to SQL Server (perhaps with full-text enabled) work? I'd hope the cost should be well under 5 figures, USD. (And yes, I know, we're cheap. We're a startup with little money, and BizSpark takes care of all our MS licensing.) Edit: I should add, we have about 10 physical servers, 20 VMs, and a couple of firewalls and switches. 90% is Windows.
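    For what it's worth, the syslog-to-SQL-Server idea sketched in the question is workable at this scale. A minimal sketch of the table plus full-text setup (all names are hypothetical; the syslog server would be configured separately to write into this table):
    CREATE TABLE dbo.SyslogEntries (
        Id       BIGINT IDENTITY(1,1) CONSTRAINT PK_SyslogEntries PRIMARY KEY,
        LoggedAt DATETIME2     NOT NULL,
        Host     NVARCHAR(128) NOT NULL,
        Message  NVARCHAR(MAX) NOT NULL
    );
    CREATE FULLTEXT CATALOG LogCatalog;
    CREATE FULLTEXT INDEX ON dbo.SyslogEntries (Message)
        KEY INDEX PK_SyslogEntries ON LogCatalog;
    -- after which searching is just:
    SELECT TOP 100 LoggedAt, Host, Message
    FROM dbo.SyslogEntries
    WHERE CONTAINS(Message, '"timeout" OR "deadlock"')
    ORDER BY LoggedAt DESC;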

    Read the article

  • Large mailbox in Outlook 2007 takes ages to index

    - by Reado
    In our company each user has a single mailbox and all email they have ever sent/received is in that mailbox. We don't do archiving to PST and we thought that was the way forward. The problem we now have is if someone switches to another PC for the day and opens Outlook, it has to download all emails first to that PC (cached mode) but even then when they try to search for something, Outlook says items are still being indexed. One user has over 100,000 items to be indexed and it's been saying that for about a week! As a temporary workaround I have turned off instant searching which allows them to search for anything, but it takes time to filter through, and Outlook doesn't exactly indicate if it's still searching for something, so in most cases the user thinks the search isn't working when really it is and it's just taking time to populate the results. I need a solution that allows the mailbox to be indexed really quickly if the user has to login to another PC. Are we best using Online Mode instead of Cached Mode or is there another way around this? Thanks in advance.

    Read the article

  • Fix Corrupted Ruby in Mac OS X Lion

    - by luckyb56
    I screwed up my Ruby by executing the command
    sudo easy_install pip> /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"
    It showed these errors:
    error: Couldn't find index page for '-e' (maybe misspelled?)
    No local packages or download links found for -e
    error: Could not find suitable distribution for Requirement.parse('-e')
    After that, when I tried to install Brew with:
    /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"
    it showed errors which I cannot make sense of:
    /usr/bin/ruby: line 1: Searching: command not found
    /usr/bin/ruby: line 2: Best: command not found
    /usr/bin/ruby: line 3: Processing: command not found
    Usage: pip COMMAND [OPTIONS]
    pip: error: No command by the name pip 1.1 (maybe you meant "pip install 1.1")
    /usr/bin/ruby: line 5: Installing: command not found
    /usr/bin/ruby: line 6: Installing: command not found
    /usr/bin/ruby: line 8: Using: command not found
    /usr/bin/ruby: line 9: Processing: command not found
    /usr/bin/ruby: line 10: Finished: command not found
    /usr/bin/ruby: line 11: Searching: command not found
    /usr/bin/ruby: line 12: Reading: command not found
    /usr/bin/ruby: line 13: syntax error near unexpected token `('
    /usr/bin/ruby: line 13: `Scanning index of all packages (this may take a while)'
    Can this be fixed?
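    A hedged reading of what happened here: in the first command the shell parsed "pip> /usr/bin/ruby" as a redirection, so easy_install's progress text ("Searching...", "Best match...", "Processing...") was written over the /usr/bin/ruby binary itself, which is exactly why the system then tries to execute those words line by line. If that reading is right, the fix is to put a real ruby back. A sketch, assuming the usual OS X Lion layout where the framework copy is untouched (verify each path before copying):
    # confirm /usr/bin/ruby is now a text file, not a Mach-O binary
    file /usr/bin/ruby
    # the framework build should still be intact (path is an assumption; check it exists)
    ls /System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby
    # put it back and restore permissions
    sudo cp /System/Library/Frameworks/Ruby.framework/Versions/Current/usr/bin/ruby /usr/bin/ruby
    sudo chmod 755 /usr/bin/ruby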

    Read the article

  • Need Info on the Hidden Switch in SET - "/S" How to implement

    - by ttyl
    I am having some problems doing a proper search for "SET/S" or "SET /S" on Google and other search providers. The difficulty arises with the slash "/"; it is commonly used in search engines to add a "nearness" to the search parameter, and I have found no way to escape the slash when searching for a slash. For those on this community, try searching this domain with the two search terms listed above. It just doesn't work - it ends up looking for SET S instead. But I digress. So I'm asking the uber-gurus on this board to help me find out about the documentation of /S and how to implement SET /S in a batch file. SET is an internal DOS/cmd command and allows many things including prompting the user, integer math and writing environment strings. Looking at this link: http://www.robvanderwoude.com/os2set.php it appears that the /S is only for OS/2, but I'm thinking that this might not be the case, due to this: http://www.dostips.com/forum/viewtopic.php?t=2704, apparently used with substrings and macros. Any help is much appreciated.

    Read the article

  • Using Windows Explorer, how to find file names starting with a dot (period), in 7 or Vista?

    - by Chris W. Rea
    I've got a MacBook laptop in the house, and when Mac OS X copies files over the network, it often brings along hidden "dot-files" with it. For instance, if I copy "SomeUtility.zip", there will also be copied a hidden ".SomeUtility.zip" file. I consider these OS X dot-files as useless turds of data as far as the rest of my network is concerned, and don't want to leave them on my Windows file server. Let's assume these dot-files will continue to happen. i.e. Think of the issue of getting OS X to stop creating those files, in the first place, to be another question altogether. Rather: How can I use Windows Explorer to find files that begin with a dot / period? I'd like to periodically search my file server and blow them away. I tried searching for files matching ".*" but that yielded – and not unexpectedly – all files and folders. Is there a way to enter more specific search criteria when searching in Windows Explorer? I'm referring to the search box that appears in the upper-right corner of an Explorer window. Please tell me there is a way to escape my query to do what I want? (Failing that, I know I can map a drive letter and drop into a cygwin prompt and use the UNIX 'find' command, but I'd prefer a shiny easy way.)
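    A pointer that may save the next searcher some digging, offered as something to verify rather than a guaranteed answer: Windows Search's Advanced Query Syntax has a "begins with" operator, so typing the following into the Explorer search box should match only names starting with a dot, without any escaping tricks:
    name:~<"."
    Here ~< is the AQS shorthand for "starts with"; the quoted dot is the literal character to match at the start of the file name.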

    Read the article

  • How to work around blocked outbound hkp port for apt keys

    - by kief_morris
    I'm using Ubuntu 9.10, and need to add some apt repositories. Unfortunately, I get messages like this when running sudo apt-get update:
    W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5A9BF3BB4E5E17B5
    W: GPG error: http://ppa.launchpad.net karmic Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1DABDBB4CEC06767
    So, I need to install the keys for these repositories. Under 9.10 we now have the option to do this:
    sudo add-apt-repository ppa:nvidia-vdpau/ppa
    See this Ubuntu help article for details. This is great, except that I'm running this on a workstation behind a firewall which blocks outbound connections to pretty much all ports except those required by secretaries running Windows and IE. The port in question here is the hkp service, port 11371. There appear to be ways to manually download keys and install them on apt's keyring. There may even be a way to use add-apt-repository or wget or something to download a key from an alternative server, making it available on port 80. However, I haven't yet found a concise set of steps for doing so. What I'm looking for is:
    1. How to find a public key for an apt package (recommendations for resources which have these, and/or tips for searching - searching for the key hash doesn't seem all that effective so far).
    2. How to retrieve a key (can it be done automatically using gpg or add-apt-repository?)
    3. How to add a key to apt's keyring
    Thanks in advance.
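    A sketch of the usual workaround, for reference - it leans on the fact that some keyservers also answer on port 80, which is an assumption you'd need to confirm works from behind this particular firewall (key IDs taken from the errors above):
    # fetch the key from a keyserver listening on port 80 instead of 11371
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 5A9BF3BB4E5E17B5
    # or do it in two steps with gpg, then hand the key to apt
    gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 1DABDBB4CEC06767
    gpg --armor --export 1DABDBB4CEC06767 | sudo apt-key add -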

    Read the article

  • Using Varnish (only) for DDoS mitigation

    - by Martin Kanters
    My VPS is suffering from a (D)DoS attack, a SYN flood with spoofed IPs. I'm currently searching for ways to defend against it, at least a bit. It's running a DirectAdmin apache2 webserver, mainly used for serving PHP and MySQL. We are using CloudFlare, which says it can mitigate (D)DoS at some level, but the attacker knows our real IP address, so CloudFlare isn't helping a bit. I've done some searching on the net and found out about enabling SYN cookies to defend against it. I checked my settings, and it seems they were enabled all along. I've also read that Varnish is able to defend against SYN flooding and Slowloris attacks, and now I'm pretty interested in using that. The thing is that CloudFlare is already caching a lot for us, and I don't wish to spend too many resources on Varnish. Is it possible and smart to set up Varnish only for the better handling of requests? Are there perhaps better ways which I've missed? Thanks in advance, Martin
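    One hedged note on the premise: a SYN flood with spoofed sources never completes the TCP handshake, so a user-space proxy like Varnish never even sees those connections - the kernel and the firewall are where that fight happens. A sketch of the usual kernel-level knobs (the values are illustrative starting points, not tuned recommendations, and the iptables rules must come before any general ACCEPT rule):
    # confirm SYN cookies really are on
    sysctl net.ipv4.tcp_syncookies            # expect: = 1
    # give the kernel more room for half-open connections, retry SYN-ACKs less
    sysctl -w net.ipv4.tcp_max_syn_backlog=4096
    sysctl -w net.ipv4.tcp_synack_retries=2
    # crude iptables rate limit on new SYNs (tune limits against real traffic first)
    iptables -A INPUT -p tcp --syn -m limit --limit 50/s --limit-burst 100 -j ACCEPT
    iptables -A INPUT -p tcp --syn -j DROP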

    Read the article

  • Windows Media Center doesn't see my movies

    - by DrJekyll
    I am trying to configure my Windows Media Center (Windows 7 Ultimate). I selected the folder with my movies and added it to the library, but when I went to the movies library, it says "There are no items in this library yet - Windows Media Center is searching for media files in the background...". I have all the necessary codecs installed; Windows Media Player opens those movies correctly. When I right-click on a file - Open with - Windows Media Center, it also plays them without any problem. Any ideas why they don't appear in the libraries? Edit: The movies are encoded with DivX and Xvid codecs and have the ".avi" extension. Windows doesn't have problems playing them. I told Media Center where the files are. I even pointed Windows Media Center to a folder with only one .avi file, and it still couldn't find anything there. (I have given it quite some time, even though searching in a directory with only one file shouldn't take more than a few seconds.) When I add a folder with a lot of movies, I get a dialog box "You can wait while media is added or select OK to continue using Windows Media Center." At the end it says it added about 90 movies, but when I go to the libraries, it's still empty.

    Read the article

  • How do I identify Blackberry / OWA users in my IIS logs?

    - by Quinten
    We just rolled out a Blackberry Express Server, and would like to make sure that all Blackberry devices that our users own are connecting SOLELY through the BES server. We are running Exchange 2010 SP1. I've read some links that discuss blocking BIS at the firewall level. Before doing that, however, I'd like to individually contact all users with Blackberries and make sure that they have a chance to switch to the BES server. I've sent a company-wide email, but unsurprisingly folks tend to tune these out until they are forced into action. Is there an easy way to identify the users with Blackberries by searching IIS logs, or perhaps using the Exchange Management Shell? Especially some automated way? I've tried searching for the Blackberry identifier, but it does not appear next to any user name, so it's not as helpful as it could be. Edit: to clarify, what I'm talking about is the fact that Blackberries can use OWA to download mail to the phone. We do not allow IMAP or POP access through our firewall so that's not a concern--just folks with Blackberries using Blackberry's hack to allow it to connect to Exchange without a BES server. As far as I know, Blackberries are the only popular phones that use this method to download mail.
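    A sketch of one automated way to pull this out of the IIS logs with Microsoft's Log Parser, assuming W3C-format logs with cs-username and cs(User-Agent) fields enabled (the log path is hypothetical and the user-agent match is a guess at what the devices send, so verify against a known BlackBerry user first):
    logparser -i:IISW3C -o:CSV "SELECT DISTINCT cs-username, cs(User-Agent) FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log WHERE cs(User-Agent) LIKE '%BlackBerry%' AND cs-username IS NOT NULL"
    Adding a filter on cs-uri-stem (e.g. LIKE '/owa%') could further narrow it to the OWA-based traffic described above, since BES-routed devices shouldn't be hitting OWA at all.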

    Read the article

  • Need Help getting perl module DBD::mysql installed for bugzilla on RedHat.

    - by Alos Diallo
    Hi everyone, I am having some issues getting Bugzilla set up. I have the software on the server and am trying to get the prerequisites set up. I am using RedHat 4.1.2-42. I have all of the required Perl modules save one: DBD::mysql. When I try:
    sudo perl install-module.pl DBD::mysql
    I get the following response (this is only an excerpt):
    rm -f blib/arch/auto/DBD/mysql/mysql.so
    LD_RUN_PATH="/usr/lib64/mysql:/usr/lib64:/lib64" /usr/bin/perl myld gcc -shared -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic dbdimp.o mysql.o -o blib/arch/auto/DBD/mysql/mysql.so \
      -L/usr/lib64/mysql -lmysqlclient -lz -lcrypt -lnsl -lm -L/usr/lib64 -lssl -lcrypto \
    /usr/bin/ld: skipping incompatible /usr/lib/libssl.so when searching for -lssl
    /usr/bin/ld: skipping incompatible /usr/lib/libssl.a when searching for -lssl
    /usr/bin/ld: cannot find -lssl
    collect2: ld returned 1 exit status
    make: *** [blib/arch/auto/DBD/mysql/mysql.so] Error 1
    /usr/bin/make -- NOT OK
    Running make test
    Can't test without successful make
    Running make install
    make had returned bad status, install seems impossible
    I then tried the following:
    CFLAGS="-I/usr/lib64/mysql:/usr/lib64:/lib64" perl install-module.pl DBD::mysql
    I get the same result. I have also tried to install it using CPAN, but also get the same result. Right now I have DBD-mysql v3.0007 but need v4.00. Also, when I try to install OpenSSL it says I have the latest version. Does anyone know what I have to do to get this to work? Any help would be greatly appreciated. Thank you
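    A hedged pointer based on the linker output above: ld is finding only the 32-bit libssl under /usr/lib and no 64-bit one under /usr/lib64, which usually means the x86_64 openssl-devel package is missing ("I have the latest version" refers to the runtime library, not the development files). A sketch of how one might check and fix that:
    # see which openssl pieces are installed, and for which architectures
    rpm -qa --qf '%{NAME}-%{VERSION}.%{ARCH}\n' | grep -i openssl
    # install the 64-bit development package, then retry the module build
    sudo yum install openssl-devel
    sudo perl install-module.pl DBD::mysql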

    Read the article

  • Missing access log for virtual host on Plesk

    - by Cummander Checkov
    For some reason I don't understand, after creating a new virtual host / domain in Plesk a few months back, I cannot seem to find the access log. I noticed this when running /usr/local/psa/admin/sbin/statistics - the host in question is being scanned:
    Main HTML page is 'awstats.<hostname_masked>-http.html'.
    Create/Update database for config "/opt/psa/etc/awstats/awstats.<hostname_masked>.com-https.conf" by AWStats version 6.95 (build 1.943)
    From data in log file "-"...
    Phase 1 : First bypass old records, searching new record...
    Searching new records from beginning of log file...
    Jumped lines in file: 0
    Parsed lines in file: 0
    Found 0 dropped records, Found 0 corrupted records, Found 0 old records, Found 0 new qualified records.
    So basically no access logs have been parsed/found. I then went on to check if I could find the log myself. I looked in /var/www/vhosts/<hostname_masked>.com/statistics/logs but all I find is error_log. Does anybody know what is wrong here and perhaps how I could fix this? Note: in the <hostname_masked>.com/conf/ folder I keep a custom vhost.conf file, which however contains only some rewrite conditions plus a directory statement that contains php_admin_flag and php_admin_value settings. None of them are related to logging though.
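    A hedged first step, given that AWStats is reading its log from "-" (i.e. nothing): check whether Apache was ever told to write an access log for this vhost, since the generated Plesk include may have lost its CustomLog line. The paths and the reconfigure command below are the usual Plesk locations and tooling, but treat them as assumptions for this particular box:
    # is any access log configured for the domain?
    grep -R "CustomLog" /var/www/vhosts/<hostname_masked>.com/conf /etc/httpd/conf.d 2>/dev/null
    # compare with a domain whose statistics do work, then regenerate the vhost
    # config with Plesk's own tooling rather than editing the include by hand:
    sudo /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=<hostname_masked>.com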

    Read the article

  • Cannot install windows. Compaq Presario CQ62

    - by Matthew
    I bought a used Compaq Presario CQ62 for cheap, and went to install Windows on it. I formatted the partition and went to install when I got this error:
    Windows cannot install required files. The file may be corrupt or missing. Make sure all files required for installation are available and restart the installation. Error code: 0x80070017
    I have used this disc before with no problems, but internet searching suggested I burn one at 2x speed because that helps for some reason... I'm burning one now, but my question is, why would I get this error, OTHER than the disc being bad? I'm pretty certain this one isn't, as I have used it before... (OK, so the slowly burned CD (using ImgBurn) didn't work either, so it's DEFINITELY not the disc.) Thanks in advance for any answers. Also, I took one stick of RAM out because internet searching also suggested that, but it didn't make a difference. I ran memory and hard drive checks and they passed fine, and I reset the motherboard options to default. What could it be!? Help, I'm completely stumped...

    Read the article

  • Windows XP to remote server 2008 R2 shares - awful response times

    - by nick3216
    I have a network infrastructure of Windows XP clients (a mix of XP and 64-bit XP), that are accessing a network share on a Windows 2008 R2 server. Whenever users type the address of a folder into the address bar of Windows Explorer it's as snappy at determining the contents of the current folder and presenting them to you in the address bar as if you're working on a local drive. But if you open one of the subfolders users get the animated red torch and 'Searching for items...' dialog, typically for 45 seconds. Similarly when using the open folder dialog to try and select a subfolder on this share it takes, on average, 45 seconds for the dialog to expand each node and show the subfolders of each node. Also, while the Explorer instance accsesing the network share is running slowly users notice that the performance of all other Explorer windows suffers. So while Explorer is searching for files on the network share they can't switch to another task and navigate around their local drive using Explorer because it's now as slow as a dead dog at accessing anything. Are there any settings we can change which will improve the performance accessing network shares?

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I'm going to postpone that until my next post, as I've had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:
    alert.Alert
    alert.Alert_Cleared
    alert.Alert_Comment
    alert.Alert_Severity
    alert.Alert_Type
    In this post, I'm only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let's have a look at it.
    SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
    FROM alert.Alert
    ORDER BY AlertId DESC;
    AlertId | AlertType | TargetObject | Read | SubType
    65550 | 39 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 1 | 0
    65549 | 38 | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 1 | 0
    65548 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | 15 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | 18 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | 14 | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    ...
    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now let's look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let's have a look at what's in this table:
    SELECT AlertType, Event, Monitoring, Name, Description
    FROM alert.Alert_Type
    ORDER BY AlertType;
    AlertType | Event | Monitoring | Name | Description
    1 | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
    2 | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
    3 | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
    4 | 1 | 0 | Deadlock | SQL deadlock occurs.
    5 | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
    6 | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
    7 | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
    8 | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
    9 | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
    10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
    11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
    12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
    14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
    15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
    16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
    17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
    18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
    19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
    24 | 0 | 0 | Job duration unusual | The duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage.
    25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
    27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
    28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
    29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
    30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
    31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
    33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
    34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
    35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server's trace flag cannot be enabled.
    36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
    37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
    38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
    39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
    40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
    41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 - some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self-evident, and I'm going to skip over the Event and Monitoring columns as they're not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:
    SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    ORDER BY AlertId DESC;
    AlertId | Name | TargetObject | Read | SubType
    65550 | Monitoring error (SQL Server data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:, | 0 | 0
    65549 | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:, | 0 | 0
    65548 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65547 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65546 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0 | 0
    65545 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65544 | Log backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65543 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData, | 0 | 0
    65542 | Integrity check overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    65541 | Backup overdue | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb, | 0 | 0
    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one's a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it's probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it's best to consider an example from the table above. Have a look at the alert with an AlertId of 65543. It's a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:
    Cluster: Name = "granger"
    SqlServer: Name = "" (an empty string, denoting the default instance)
    Database: Name = "SqlMonitorData"
    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:
    Cluster: "7:Cluster,1,4:Name,s7:granger,"
    SqlServer: "9:SqlServer,1,4:Name,s0:,"
    Database: "8:Database,1,4:Name,s14:SqlMonitorData,"
    Each part is handled in exactly the same way, so let's concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:
    - "7:Cluster," - This identifies the level in the hierarchy.
    - "1," - This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
    - "4:Name,s7:granger," - This represents the Name property, and its corresponding value, granger. It's split up like this:
      - "4:Name," - Indicates the name of the key property.
      - "s" - Indicates the type of the key property, in this case, a string.
      - "7:granger," - Indicates the value of the property.
    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:
    - "7" - This is the length of the string "Cluster".
    - ":" - This is a delimiter between the length of the string and the actual string's contents.
    - "Cluster" - This is the string itself. 7 characters.
    - "," - This is a final terminating character that indicates the end of the encoded string.
    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:
    - "I" - Denotes a bigint value. For example, "I65432,".
    - "g" - Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
    - "d" - Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I'll describe how datetime values are handled in the SQL Monitor data repository in a future post.
    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I'm interested in:
    SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    WHERE AlertId = 65769;
    AlertId | AlertType | Name | TargetObject | Read | SubType
    65769 | 40 | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2
    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn't tell us anything about the specific custom metric that this alert pertains to. That's where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
    I don't really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.
    SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
    FROM alert.Alert a
    JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
    WHERE AlertId = 65769;
    AlertId | AlertType | Name | CustomAlertName | TargetObject | Read | SubType
    65769 | 40 | Custom metric | CPU pressure (avg runnable task count) | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0 | 2
    The TargetObject value in this case breaks down like this:
    - "7:Cluster,1,4:Name,s7:granger," - Cluster named "granger".
    - "9:SqlServer,1,4:Name,s0:," - SqlServer named "" (the default instance).
    - "8:Database,1,4:Name,s6:master," - Database named "master".
    - "12:CustomMetric,1,8:MetricId,I2," - Custom metric with an Id of 2.
    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It's root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I'd like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case.
    Okay, that's enough for now, not least because as I'm typing this, it's almost 2am, I have to go to work tomorrow, and my alarm is set for 6am - eek! In my next post, I'll either cover the remaining three tables in the alert schema, or I'll delve into the way SQL Monitor stores its monitoring data, as I'd originally planned to cover in this post.
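    As a footnote for anyone who wants to pick TargetObject values apart outside of SQL Monitor: the encoding described above is regular enough to parse mechanically. The following is a minimal sketch in Python, based solely on the rules spelled out in this post - it is not SQL Monitor's own (C#) parser, and the handling of non-string values as plain text up to the next comma is inferred from the "I", "g" and "d" examples:
    def read_string(s, i):
        # length-prefixed string: "7:granger," -> ("granger", index after the comma)
        colon = s.index(":", i)
        length = int(s[i:colon])
        value = s[colon + 1:colon + 1 + length]
        return value, colon + 1 + length + 1   # skip the trailing comma

    def parse_target_object(s):
        i, levels = 0, []
        while i < len(s):
            level, i = read_string(s, i)             # e.g. "Cluster"
            comma = s.index(",", i)
            count, i = int(s[i:comma]), comma + 1    # number of key properties
            props = {}
            for _ in range(count):
                name, i = read_string(s, i)          # e.g. "Name" or "MetricId"
                kind, i = s[i], i + 1                # s=string, I=bigint, g=guid, d=datetime
                if kind == "s":
                    value, i = read_string(s, i)
                else:                                # non-string values run up to a comma
                    comma = s.index(",", i)
                    value, i = s[i:comma], comma + 1
                    if kind in ("I", "d"):
                        value = int(value)
                props[name] = value
            levels.append((level, props))
        return levels

    # parse_target_object("7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,"
    #                     "8:Database,1,4:Name,s14:SqlMonitorData,")
    # -> [("Cluster", {"Name": "granger"}), ("SqlServer", {"Name": ""}),
    #     ("Database", {"Name": "SqlMonitorData"})]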

    Read the article

  • Is Social Media The Vital Skill You Aren’t Tracking?

    - by HCM-Oracle
    By Mark Bennett - Originally featured in Talent Management Excellence
    The ever-increasing presence of the workforce on social media presents opportunities as well as risks for organizations. While on the one hand, we read about social media embarrassments happening to organizations, on the other we see that social media activities by workers and candidates can enhance a company's brand and provide insight into what individuals are, or can become, influencers in the social media sphere. HR can play a key role in helping organizations get the most value out of the activities and presence of workers and candidates, while at the same time also helping to manage the risks that come with the permanence and viral nature of social media.
    What is Missing from Understanding Our Workforce?
    "If only HP knew what HP knows, we would be three-times more productive." - Lew Platt, Former Chairman, President, CEO, Hewlett-Packard
    What Lew Platt recognized was that organizations only have a partial understanding of what their workforce is capable of. This lack of understanding impacts the company in several negative ways:
    1. A particular skill that the company needs to access in one part of the organization might exist somewhere else, but there is no record that the skill exists, so the need is unfulfilled.
    2. As market conditions change rapidly, the company needs to know strategic options, but some options are missed entirely because the company doesn't know that sufficient capability already exists to enable those options.
    3. Employees may miss out on opportunities to demonstrate how their hidden skills could create new value to the company.
    Why don't companies have that more complete picture of their workforce capabilities - that is, not know what they know? One very good explanation is that companies put most of their efforts into rating their workforce according to the jobs and roles they are filling today. This is the essence of two important talent management processes: recruiting and performance appraisals.
    In recruiting, a set of requirements is put together for a job, either explicitly or indirectly through a job description. During the recruiting process, much of the attention is paid towards whether the candidate has the qualifications, the skills, the experience and the cultural fit to be successful in the role. This makes a lot of sense.
    In the performance appraisal process, an employee is measured on how well they performed the functions of their role and, in an effort to help the employee do even better next time, they are also measured on proficiency in the competencies that are deemed to be key in doing that job. Again, the logic is impeccable.
    But in both these cases, two adages come to mind:
    1. What gets measured is what gets managed.
    2. You only see what you are looking for.
    In other words, the fact that the current roles the workforce are performing are the basis for measuring which capabilities the workforce has makes them the only capabilities to be measured. What was initially meant to be a positive - identify what is needed to perform well and measure it, in order that it can be managed - comes with the unintended negative consequence of overshadowing the other capabilities the workforce has. This also comes with an employee engagement price, for the measurement and management of workforce capabilities typically focuses on where the workforce comes up short.
    Again, it makes sense to do this, since improving a capability that appears to result in improved performance benefits both the individual, through improved performance ratings, and the company, through improved productivity. But this is based on the assumption that the capabilities identified and their required proficiencies are the only attributes of the individual that matter. Anything else the individual brings that results in high performance, while resulting in a desired performance outcome, often goes unrecognized or underappreciated at best.
    As social media begins to occupy a more important part in current and future roles in organizations, businesses must incorporate social media savvy and innovation into job descriptions and expectations. These new measures could provide insight into how well someone can use social media tools to influence communities and decision makers; keep abreast of trends in fast-moving industries; present a positive brand image for the organization around thought leadership, customer focus, social responsibility; and coordinate and collaborate with partners. These measures should demonstrate the "social capital" the individual has invested in and developed over time. Without this dimension, "short cut" methods may generate a narrow set of positive metrics that do not have real, long-lasting benefits to the organization.
    How Workforce Reputation Management Helps HR Harness Social Media
    With hundreds of petabytes of social media data flowing across Facebook, LinkedIn and Twitter, businesses are tapping technology solutions to effectively leverage social for HR. Workforce reputation management technology helps organizations discover, mobilize and retain talent by providing insight into the social reputation and influence of the workforce, while also helping organizations monitor employee social media policy compliance and mitigate social media risk.
    There are three major ways that workforce reputation management technology can play a strategic role to support HR:
    1. Improve Awareness and Decisions on Talent
    Many organizations measure the skills and competencies that they know they need today, but are unaware of what other skills and competencies their workforce has that could be essential tomorrow. How about whether your workforce has the reputation and influence to make their skills and competencies more effective? Many organizations don't have insight into the social media "reach" their workforce has, which is becoming more critical to business performance. These features help organizations, managers, and employees improve many talent processes and decision making, including the following:
    - Hiring and Assignments. People and teams with higher reputations are considered more valuable and effective workers. Someone with high reputation who refers a candidate also can have high credibility as a source for hires.
    - Training and Development. Reputation trend analysis can impact program decisions regarding training offerings by showing how reputation and influence across the workforce changes in concert with training. Worker reputation impacts development plans and goal choices by helping the individual see which development efforts result in improved reputation and influence.
    - Finding Hidden Talent. Managers can discover hidden talent and skills amongst employees based on a combination of social profile information and social media reputation. Employees can improve their personal brand and accelerate their career development.
    2. Talent Search and Discovery
    The right technology helps organizations find information on people that might otherwise be hidden. By leveraging access to candidate and worker social profiles as well as their social relationships, workforce reputation management provides companies with a more complete picture of what their knowledge, skills, and attributes are and what they can in turn access. This more complete information helps to find the right talent both outside the organization as well as the right, perhaps previously hidden talent, within the organization to fill roles and staff projects, particularly those roles and projects that are required in reaction to fast-changing opportunities and circumstances.
    3. Reputation Brings Credibility
    Workforce reputation management technology provides a clearer picture of how candidates and workers are viewed by their peers and communities across a wide range of social reputation and influence metrics. This information is less subject to individual bias and can impact critical decision-making. Knowing the individual's reputation and influence enables the organization to predict how well their capabilities and behaviors will have a positive effect on desired business outcomes. Many roles that have the highest impact on overall business performance are dependent on the individual's influence and reputation. In addition, reputation and influence measures offer a very tangible source of feedback for workers, providing them with insight that helps them develop themselves and their careers and see the effectiveness of those efforts by tracking changes over time in their reputation and influence.
    The following are some examples of the different reputation and influence measures of the workforce that Workforce Reputation Management could gather and analyze:
    - Generosity - How often the user reposts others' posts.
    - Influence - How often the user's material is reposted by others.
    - Engagement - The ratio of recent posts with references (e.g. links to other posts) to the total number of posts.
    - Activity - How frequently the user posts (e.g. number per day).
    - Impact - The size of the user's social networks, which indicates their ability to reach unique followers, friends, or users.
    - Clout - The number of references and citations of the user's material in others' posts.
    The Vital Ingredient of Workforce Reputation Management: Employee Participation
    "Nothing about me, without me." - Valerie Billingham, "Through the Patient's Eyes", Salzburg Seminar Session 356, 1998
    Since data resides primarily in social media, a question arises: what manner is used to collect that data? While much of social media activity is publicly accessible (as many who wished otherwise have learned to their chagrin), the social norms of social media have developed to put some restrictions on what is acceptable behavior and by whom. Disregarding these norms risks a repercussion firestorm. One of the more recognized norms is that while individuals can follow and engage with other individuals' public social activity (e.g. Twitter updates) fairly freely, the more an organization does this unprompted and without getting permission from the individual beforehand, the more likely the organization risks a totally opposite outcome from the one desired. Instead, the organization must look for permission from the individual, which can be met with resistance.
    That resistance comes from not knowing how the information will be used, how it will be shared with others, and not receiving enough benefit in return for granting permission. As the quote above about patient concerns and rights succinctly states, no one likes not feeling in control of the information about themselves, or the uncertainty about where it will be used. This is well understood in consumer social media (i.e. permission-based marketing) and is applicable to workforce reputation management. However, asking permission leaves open the very real possibility that no one, or so few, will grant permission, resulting in a small set of data with little usefulness for the company.
    Connecting Individual Motivation to Organization Needs
    So what is it that makes an individual decide to grant an organization access to the data it wants? It is when the individual's own motivations are in alignment with the organization's objectives. In the case of workforce reputation management, when the individual is motivated by a desire for increased visibility and career growth opportunities to advertise their skills and level of influence and reputation, they are aligned with the organization's objectives: to fill resource needs or strategically build better awareness of what skills are present in the workforce, as well as levels of influence and reputation.
    Individuals can see the benefit of granting access permission to the company through multiple means. One is through simple social awareness; they begin to discover that peers who are getting more career opportunities are those who are signed up for workforce reputation management. Another is where companies take the message directly to the individual: we think you would benefit from signing up with our workforce reputation management solution. Another, more strategic approach is to make reputation management part of a larger career development effort by the company, providing a wide set of tools to help the workforce find ways to plan and take action to achieve their career aspirations in the organization.
    An effective mechanism that facilitates connecting the visibility and career growth motivations of the workforce with the larger context of the organization's business objectives is to use game mechanics to help individuals transform their career goals into concrete, actionable steps, such as signing up for reputation management. This works in favor of companies looking to use workforce reputation because the workforce is more apt to see how it fits into achieving their overall career goals, as well as seeing how other participation brings additional benefits.
    Once an individual has signed up with reputation management, not only have they made themselves more visible within the organization and increased their career growth opportunities, they have also enabled a tool that they can use to better understand how their actions and behaviors impact their influence and reputation. Since they will be able to see their reputation and influence measurements change over time, they will gain better insight into how reputation and influence impact their effectiveness in a role, as well as how their behaviors and skill levels in turn affect their influence and reputation. This insight can trigger much more directed, and effective, efforts by the individual to improve their ability to perform at a higher level and become more productive.
The increased sense of autonomy individuals experience in linking the insight they gain to the actions and behavior changes they make greatly enhances their engagement with their role, as well as their career prospects within the company. Workforce reputation management takes the wide range of disparate data about the workforce being produced across various social media platforms and transforms it into accessible, relevant, and actionable information that helps the organization achieve its desired business objectives. Social media holds untapped insights about your talent, brand, and business, and workforce reputation management can help unlock them. Imagine: if you could find the hidden secrets of your business, how much more productive and efficient would your organization be?

Mark Bennett is a Director of Product Strategy at Oracle. Mark focuses on setting the strategic vision and direction for tools that help organizations understand, shape, and leverage the capabilities of their workforce to achieve business objectives, as well as help individuals work effectively to achieve their goals and navigate their own growth. His deep technical background in software design and development, coupled with a broad knowledge of the business challenges of today's globalized, rapidly changing, technology-accelerated economy, has enabled him to identify and incorporate key innovations that are central to Oracle Fusion's unique value proposition. Over the course of his career, Mark has been in charge of the design, development, and strategy of Talent Management products, and of cutting-edge software better equipped to handle the increasingly complex demands of users while remaining easy to use. Follow him @mpbennett
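As referenced above, here is a minimal sketch in Java of how reputation and influence measures like these might be computed from raw social activity data. It is purely illustrative: the Post class, its fields, and the notion of a post feed are assumptions of this sketch, not part of any Oracle product or API.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;
    import java.util.stream.Collectors;

    // Hypothetical representation of a single social media post (illustrative only).
    class Post {
        String author;            // who wrote the post
        String repostOfAuthor;    // author of the original, if this is a repost; otherwise null
        Set<String> citedAuthors; // authors whose material this post references or cites
        Set<String> audience;     // unique followers/friends this post reached

        Post(String author, String repostOfAuthor, Set<String> citedAuthors, Set<String> audience) {
            this.author = author;
            this.repostOfAuthor = repostOfAuthor;
            this.citedAuthors = citedAuthors;
            this.audience = audience;
        }
    }

    class ReputationMeasures {
        // Prints the six example measures for one user, given all posts in a time window.
        static void report(String user, List<Post> posts, double windowDays) {
            List<Post> mine = posts.stream()
                    .filter(p -> user.equals(p.author))
                    .collect(Collectors.toList());
            long total = mine.size();

            // Generosity: how often the user reposts others' posts.
            long generosity = mine.stream().filter(p -> p.repostOfAuthor != null).count();

            // Influence: how often the user's material is reposted by others.
            long influence = posts.stream()
                    .filter(p -> !user.equals(p.author) && user.equals(p.repostOfAuthor))
                    .count();

            // Engagement: ratio of the user's posts with references to total posts.
            long withRefs = mine.stream().filter(p -> !p.citedAuthors.isEmpty()).count();
            double engagement = total == 0 ? 0.0 : (double) withRefs / total;

            // Impact: unique people reached across all of the user's posts.
            Set<String> reach = new HashSet<>();
            mine.forEach(p -> reach.addAll(p.audience));

            // Clout: references and citations of the user's material in others' posts.
            long clout = posts.stream()
                    .filter(p -> !user.equals(p.author) && p.citedAuthors.contains(user))
                    .count();

            System.out.printf("Generosity: %d, Influence: %d, Engagement: %.2f, "
                    + "Activity: %.2f/day, Impact: %d, Clout: %d%n",
                    generosity, influence, engagement, total / windowDays, reach.size(), clout);
        }
    }

A real system would of course normalize these raw counts across platforms and time windows; the point here is only that each measure in the list above reduces to a simple aggregation over post data.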

    Read the article

  • MSCC: Global Windows Azure Bootcamp

Mauritius participated and contributed to the Global Windows Azure Bootcamp 2014 (GWAB). Again! And this time stronger than ever, together with 137 other locations in 56 countries world-wide. We had 62 named registrations, 7 guest additions and approximately 10 offline participants prior to the event day. Most interestingly, organising the GWAB through the MSCC helped to increase the number of craftsmen: the Mauritius Software Craftsmanship Community currently has 138 registered members - in less than one year! With those numbers alone we can proudly say that all the preparations and hard work towards this event have already paid off. Personally, I'm really grateful that we had this kind of response, and the feedback from some attendees confirmed that the MSCC is on the right track here on Cyber Island Mauritius. Inspired and motivated by the success of this event, rest assured that there will be more public events like the GWAB.

This time it took some time to reflect on our meetup, following my first impression right on the spot: "Wow, what an experience to organise and participate in this global event. Overall, I've been very pleased with the preparations and the event itself. Surely, there have been some nicks that we have to address and to improve for future activities like this. Quite frankly, we are not professional event organisers (not yet) but we learned a lot over the past couple of days. A big Thank You to our event sponsors, namely Microsoft Indian Ocean Islands & French Pacific, Ceridian Mauritius and Emtel. Without them this event wouldn't have happened - Thank You! And to the cool team members of Microsoft Student Partners (MSPs). You geeks did a great job! Thanks!"

So, how many attendees did we actually have? 61! - Awesome - 61 cloud computing instances to help with the research on diabetes. During Saturday afternoon there was even an online publication on L'Express: Les développeurs mauriciens se joignent au combat contre le diabète

Reactions of other attendees

Don't just take my word for it... Here are some impressions and feedback from our participants:

- "Awesome event, really appreciated the presentations :-)" -- Kevin on event comments
- "very interesting and enriching." -- Diana on event comments
- "#gwab #gwabmru 2014 great success. Looking forward for gwab 2015" -- Wasiim on Twitter
- "Was there till the end. Awesome Event. I'll surely join upcoming meetup sessions :)" -- Luchmun on event comments
- "#gwabmru was not that cool. left early" -- Mohammad on Twitter

The overall feedback is positive, but we are absolutely aware that there were quite a number of problems we had to face. We are already looking into them, with ideas and action plans on how to improve things for future events.

The sessions

We started the day with welcoming speeches by Thierry Coret, Sr. Marketing Manager of Microsoft Indian Ocean Islands & French Pacific, and Vidia Mooneegan, Managing Director and Sr. Vice President of Ceridian Mauritius. The clear emphasis was on the endless possibilities of cloud computing and how it can enable any sector here in the country. Then it was about time to set up the cloud computing services in order to contribute each attendee's cloud computing resources to the global research on diabetes, in a step-by-step guide presented by Arnaud Meslier, Technical Evangelist at Microsoft. Given a rendering package and a configuration file, it was very interesting to follow the individual steps in Windows Azure.
Also, during the day we were not sure whether the setup had been done correctly, as Mauritius didn't show up on the results board - which should have been the case after approximately 20 to 30 minutes. Anyway, let the minions work...

Next, Arnaud gave a brief overview of the variety of services Windows Azure has to offer: whether you need a development environment for your websites or mobile apps, want to run a virtual machine with your existing applications, or simply want to put a SQL database online - Windows Azure has the right packages available, and the online management portal is really easy to handle.

After this, we got a little more business-oriented as Wasiim Hosenbocus, an employee at Ceridian, took the attendees through the internals of a real-life application and demoed a couple of its existing features. He did a great job of showing how the different services of Windows Azure can be created and pulled together.

After the lunch break it is always tough to keep the audience awake... And it was my turn. I gave a brief overview of operating and managing a SQL database on Windows Azure. There are actually two options available, and depending on your individual requirements you should be aware of both. The simpler version is called SQL Database: while provisioning only takes a couple of seconds, you should take into consideration that SQL Database has a number of constraints, like limitations on the actual database size - up to 5 GB for the Web edition and up to 150 GB for the Business edition - among others. (A minimal sketch of connecting to a SQL Database from Java appears at the end of this article.)

Next, it was Chervine Bhiwoo's session on Windows Azure Mobile Services. It was absolutely amazing to see that the mobile services offer various project templates out of the box, like Windows 8 Store App, Android app, iOS app, and even Xamarin cross-platform app development. So, within a couple of minutes you can have your first mobile app up and running on Windows Azure. Furthermore, Chervine showed the attendees that adding another user interface, like Web Sites running on ASP.NET MVC 4, only takes a couple of minutes, too.

And last but not least, we rounded up the day with Windows Azure Websites and hosting of Virtual Machines, presented by some members of the local Microsoft Student Partners programme. Surely, one of the big advantages of using Windows Azure is the availability of pre-defined installation packages of well-known web applications, like WordPress, Joomla!, or Ghost. Compared to running your own web site with a traditional web hoster it is surely on par, and depending on your personal level of expertise, Windows Azure may give you more liberty in terms of configuration than a shared hosting environment. Running a pre-defined web application is one thing, but if you would like more control over your hosting environment, it is highly advisable to opt for a virtual machine. Provisioning an Ubuntu 12.04 LTS system was very simple to do, but it takes a few more minutes than you might expect - so please be patient and take your time while Windows Azure gets everything in place for you. Afterwards, you can log in to your virtual machine using a Secure Shell (SSH) client like PuTTY in the case of a Linux-based machine, or Remote Desktop Services when operating a Windows Server system.

At the end of the day we had a great Q&A session, and we finalised the event with our raffle of goodies. Participation in the raffle was bound to submission of the session survey, and most gratifyingly we had a give-away for everyone.
What a nice coincidence to finish off the day.

Note: All session slides (and demo code) will be made available on the MSCC event page. Please check the Files section over there.

(Some) Visual impressions from the event

Just to give you an idea about what happened during the GWAB 2014 at Ebene...

- Speakers and Microsoft Student Partners are getting ready for the Global Windows Azure Bootcamp 2014
- GWAB 2014 attendees are fully integrated into the hands-on labs, setting up their individual cloud computing services
- 60 attendees at the GWAB 2014. Despite some technical difficulties we had a great time running the event
- GWAB 2014: Using the lunch break for networking and exchange of ideas - great conversations and topics amongst attendees

There are more pictures on the original event page.

Questions & Answers

Following are a couple of questions which were asked but didn't get an answer during the event:

Q: Is it possible to upload static pages via FTP?
A: Yes, you can. Have a look at the right side column on the dashboard of your website. There you'll find information about the FTP and SFTP host names. You can use any FTP client, like FileZilla, to log in. FTP also gives you access to your log files.

Q: What are the size limitations on SQL Database?
A: 5 GB on the Web edition, and 150 GB on the Business edition. A maximum of 150 databases (including 'master') per SQL Database server. More details here: General Guidelines and Limitations (Azure SQL Database)

Q: What's the maximum size of a SQL Server running in a Virtual Machine?
A: The largest Windows Azure VM currently has 8 CPU cores, 14 or 56 GB of RAM and 16x 1 TB hard drives. More details here: Virtual Machine and Cloud Service Sizes for Azure

Q: How can we register for Windows Azure?
A: Mauritius is currently not listed for phone verification. Please get in touch with Arnaud Meslier at Microsoft IOI & FP

Q: Can I use my own domain name for Windows Azure websites?
A: Yes, you can. But this might require upscaling your website to Standard.

If I missed a question and answer, please use the comment section at the end of the article. Thanks!

Final results

Every participant was instructed during the hands-on-lab session on how to set up a cloud computing service in their account. Of course, I won't keep the results from you...

Global Azure Lab GWAB 2014: Our cloud computing contribution to the research on diabetes

And I would say Mauritius did a good job!

Upcoming Events

What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

- Launch of Microsoft SQL Server 2014 (15.4.2014)
- Corsair Hackers Reboot (19.4.2014)
- WebCup (TBA ~ June 2014)
- Developers Conference (TBA ~ July 2014)
- Linuxfest 2014 (TBA ~ November 2014)

Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

Networking and job/project opportunities

Beyond the technical presentations on Windows Azure, an event like this always offers plenty of opportunities to get in touch with new people in IT and to exchange experiences with other like-minded people.
As I already wrote in Communities - The importance of exchange and discussion - I had a great conversation with representatives of the Université des Mascareignes, which is currently embracing cloud infrastructure and cloud computing for its various campuses in the Indian Ocean. For the MSCC it would be a great experience to stay in touch with them and to work on upcoming joint activities. Furthermore, I had a very good conversation with Thierry and Ludovic of Microsoft IOI & FP on the necessity of user groups and IT communities here on the island. It's great to see that the MSCC is currently on a roll and that local companies share our thoughts on promoting IT careers and the exchange of IT knowledge in such an open way. I'm also looking forward to participating in and contributing to more events in the near future.

My recap of the day

We learned a lot today, and there is always room for improvement! It was an awesome event, and quite frankly it was a pleasure to spend the day with so many enthusiastic IT people in the same room. It was a great experience to organise such an event locally and to participate on a global scale in support of the GlyQ-IQ technology in its research on diabetes. I was so pleased to see the involvement of new MSCC members in taking the opportunity to share and learn about the power of cloud computing. The Mauritius Software Craftsmanship Community is on the right way, and this year's bootcamp on Windows Azure only marked the beginning of our journey. Thank you to our sponsors, and my kudos to the MSPs!

Update: Media coverage

The event has been reported in local media, too. Following are some resources:

- Orange - Local - Business: Le cloud, pour des recherches approfondies sur le diabète
- Maurice Info.mu: Le cloud, pour des recherches approfondies sur le diabète
- Le Quotidien Pg 2: Global Windows Azure Bootcamp 2014 - Le cloud pour des recherches approfondies sur le diabète
- The Observer Pg 12: Le cloud, pour des recherches approfondies sur le diabète
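As referenced in the SQL Database session notes above, here is a minimal sketch of connecting to a Windows Azure SQL Database from Java over JDBC. The server name, database, and credentials are placeholder values, and the sketch assumes the Microsoft JDBC Driver for SQL Server is on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AzureSqlDatabaseDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details - substitute the values shown for
            // your own server in the Windows Azure management portal.
            String url = "jdbc:sqlserver://yourserver.database.windows.net:1433;"
                    + "database=yourdb;"
                    + "user=youruser@yourserver;"
                    + "password=yourpassword;"
                    + "encrypt=true;";

            try (Connection con = DriverManager.getConnection(url);
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT @@VERSION")) {
                while (rs.next()) {
                    // Prints the SQL Server version string of the hosted database.
                    System.out.println(rs.getString(1));
                }
            }
        }
    }

Note the user=youruser@yourserver convention and the encrypted connection, both of which SQL Database expects; apart from that, the database behaves like any other SQL Server endpoint from the application's point of view.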

    Read the article

  • Failed to launch simulated application: iPhone Simulator failed to find the process ID of com.iAndAp

    - by Nicsoft
Hello, I'm having this annoying problem giving this message in the console: "Failed to launch simulated application: iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop."

When trying to Build and Run, the application builds fine. The simulator starts but doesn't start the application. However, it manages to do something, since the icon for the app is installed in the simulator. I have been searching for the answer and tried a couple of things, none of which worked. This happens for both new and old projects, i.e. when I create a new project I receive the same error message. I've tried several projects, all of which get the same error, so it's not related to the code (and the successful builds prove it). Among other things I have updated to Xcode 3.2.2 in order to try to solve the problem. Using Mac OS X 10.6.3.

Here are the logs:

1.

2010-05-30 17.20.39 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Can't find the translation dictionary, loadTranslationDictionaries
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.41 SpringBoard[15713] Launchd returned an unexpected type or didn't return a value for job label UIKitApplication:com.iAndApp.BlockPop[0x8abd] with job key PID
2010-05-30 17.20.41 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.21.10 Xcode[15496] Error launching simulated application: Error Domain=DTiPhoneSimulatorErrorDomain Code=1 UserInfo=0x200edcc00 "iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop."

2. From system.log:

May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: a0bc84e0 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 5 times ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 1 time ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/1CD7E4BA-14D3-45A5-A05E-E552C04BCD4D/BlockPopLite.app/BlockPopLite
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/62585F19-5FAD-4548-89DF-C9AE621FCCD7/SysSound.app/SysSound
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/81AA51A5-7BFC-442F-BFF8-91E9C6EF13CD/BlockPop.app/BlockPop
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0103000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/A2DCBF96-8F15-4527-BDF1-BD90B34D401C/BlockPop.app/BlockPop
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 1 time ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:40 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:40: --- last message repeated 1 time ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:40: --- last message repeated 2 times ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Can't find the translation dictionary, loadTranslationDictionaries
May 30 17:20:40 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:40: --- last message repeated 3 times ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:41: --- last message repeated 1 time ---
May 30 17:20:41 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Launchd returned an unexpected type or didn't return a value for job label UIKitApplication:com.iAndApp.BlockPop[0x8abd] with job key PID
May 30 17:20:41 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:21:10 Niklas-Johanssons-Mac-mini Xcode[15496]: Error launching simulated application: Error Domain=DTiPhoneSimulatorErrorDomain Code=1 UserInfo=0x200edcc00 "iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop."

Where do I start? I am really stuck and would be most grateful for any help!

    Read the article

  • Hadoop WordCount example stuck at map 100% reduce 0%

    - by Abhinav Sharma
[hadoop-1.0.2] ? hadoop jar hadoop-examples-1.0.2.jar wordcount /user/abhinav/input /user/abhinav/output

Warning: $HADOOP_HOME is deprecated.
****hdfs://localhost:54310/user/abhinav/input
12/04/15 15:52:31 INFO input.FileInputFormat: Total input paths to process : 1
12/04/15 15:52:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/04/15 15:52:31 WARN snappy.LoadSnappy: Snappy native library not loaded
12/04/15 15:52:31 INFO mapred.JobClient: Running job: job_201204151241_0010
12/04/15 15:52:32 INFO mapred.JobClient: map 0% reduce 0%
12/04/15 15:52:46 INFO mapred.JobClient: map 100% reduce 0%

I've set up Hadoop on a single node using this guide (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job) and I'm trying to run a provided example, but I'm getting stuck at map 100% reduce 0%. What could be causing this?

    Read the article

  • Problem with Writing files using FileWriter automatically with Quartz Scheduler

    - by Jeeva
I have nearly 200 files that have to be written automatically, each at a particular position and time. I created separately named jobs in the Quartz scheduler, and each job is triggered at its own time. However, I can only read the files after all of them have been written; I cannot read a file right after it is written, even though I close the FileWriter after each file is written. What is the solution for accessing and reading a file as soon as it has been written to the hard disk?
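For reference, here is a minimal sketch of a Quartz job that writes a single file and guarantees the data is flushed and the writer closed before the job completes. The job-data keys ("filePath", "content") and the class name are hypothetical, not taken from the original question.

    import java.io.FileWriter;
    import java.io.IOException;
    import org.quartz.Job;
    import org.quartz.JobDataMap;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;

    // Hypothetical Quartz job: writes one file per trigger and makes sure the
    // buffered data reaches the OS before the job returns, so the file is
    // readable immediately afterwards.
    public class FileWriteJob implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            JobDataMap data = context.getJobDetail().getJobDataMap();
            String path = data.getString("filePath");   // e.g. "/data/out/file42.txt" (placeholder)
            String content = data.getString("content");

            FileWriter writer = null;
            try {
                writer = new FileWriter(path);
                writer.write(content);
                writer.flush();                          // push buffered characters to the OS
            } catch (IOException e) {
                throw new JobExecutionException(e);
            } finally {
                if (writer != null) {
                    try { writer.close(); } catch (IOException ignored) {}
                }
            }
        }
    }

The key detail is that flush() is called explicitly and close() runs inside a finally block, so each file is complete on disk by the time its job finishes, independently of whatever the other ~200 jobs are doing.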

    Read the article

  • Object Oriented database development jobs

    - by GigaPr
Hi, I am a software engineering student currently looking for a job as a developer. I have been offered a position at a company that implements software using object-oriented databases. These are completely new to me, as at university we never worked with them - just some theory. My questions are: Do you think this is a good way to start my career as a developer? What is the job market like for this type of development? Are these skills in demand? Which markets does this technology touch? Thanks

    Read the article
