Search Results

Search found 7554 results on 303 pages for 'shared secret'.


  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started: http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html You definitely need to sign up for the Amazon Associates program and register/create an Access Key ID, which will also get you a Secret Key. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE: public static void main(String[] args) {    try {        String searchIndex = "Books";        String keywords = "Romeo and Juliet";        RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);        String dataAsString = result.getDataAsString();        int start = dataAsString.indexOf("<Author>")+8;        int end = dataAsString.indexOf("</Author>");        System.out.println(dataAsString.substring(start,end));    } catch (Exception ex) {        ex.printStackTrace();    }} Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following: public class AmazonAssociatesService {    private static void sleep(long millis) {        try {            Thread.sleep(millis);        } catch (Throwable th) {        }    }    public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {        SignedRequestsHelper helper;        RestConnection conn = null;        Map queryMap = new HashMap();        queryMap.put("Service", "AWSECommerceService");        queryMap.put("AssociateTag", "myAssociateTag");        queryMap.put("AWSAccessKeyId", "myAccessKeyId");        queryMap.put("Operation", "ItemSearch");        queryMap.put("SearchIndex", searchIndex);        queryMap.put("Keywords", keywords);        try {            helper = SignedRequestsHelper.getInstance(                    "ecs.amazonaws.com",                    "myAccessKeyId",                    "mySecretKey");            String sign = helper.sign(queryMap);            conn = new RestConnection(sign);        } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {        }        sleep(1000);        return conn.get(null);    }} Finally, I copied this class into my application, which you can see is referred to above: http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all; now everything works as you'd expect.
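
    A note on the signing step: the linked SignedRequestsHelper class does the real work. As a rough, self-contained sketch of the idea (the canonical string, endpoint and keys below are simplified placeholders, not the helper's exact logic), signing a Product Advertising API query boils down to an HMAC-SHA256 over the canonicalized request, Base64-encoded into the Signature parameter:

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class SignSketch {
        // Hypothetical stand-in for SignedRequestsHelper.sign(): HMAC-SHA256 the
        // canonical request with the Secret Key, then Base64-encode the digest.
        static String hmacSha256(String secretKey, String stringToSign) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            return Base64.getEncoder().encodeToString(
                    mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));
        }

        public static void main(String[] args) throws Exception {
            // The real helper sorts and URL-encodes the query parameters first;
            // this canonical string is only an illustration.
            String canonical = "GET\necs.amazonaws.com\n/onca/xml\n"
                    + "AWSAccessKeyId=myAccessKeyId&Keywords=Romeo%20and%20Juliet&Operation=ItemSearch";
            System.out.println("Signature=" + hmacSha256("mySecretKey", canonical));
        }
    }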

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. 7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms. I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: Choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS; they don't replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • At $20/month, Windows Azure hosts my website with 99.97% uptime

    - by Gopinath
    A couple of years ago, reliable and decent-performing Windows hosting was not affordable for many enthusiastic developers who wanted to try a startup idea or build a hobby site. I tried to start an ASP.NET website a few years ago to provide services like Mobile Tracing and Vehicle Tracing, but due to the high cost of Windows hosting I developed those services using PHP (not an easy task for a .NET developer) and hosted them on Linux servers.  But with the recent evolution of Windows Azure, hosting ASP.NET websites on highly reliable servers is affordable. Today anyone can host a highly responsive and available ASP.NET website for just $20/month using Windows Azure. My website coziie.com is running on Windows Azure and serves close to a quarter million visitors a month with 99.97% uptime, and most page load times are less than 3 seconds. All I spend to run this website is around $20; if you translate that to Indian rupees it's roughly Rs.1000. The web server of coziie.com is powered by a single Extra Small Web role instance and the backend is powered by a SQL Azure instance. Azure is quite impressive in providing 99.97% uptime. Response times during peak are around 3 seconds and on normal loads around 1.5 seconds. Here is the report of uptime provided by Royal Pingdom over the last year. For just $20/month Windows Azure takes care of the following apart from hosting: patches the Windows OS to the latest version; upgrades ASP.NET to the latest version – coziie.com is running on ASP.NET MVC 3 and soon I’ll upgrade it to ASP.NET MVC 4; hosts data on the latest and best version of the SQL Server database; SQL Azure maintains 3 copies of the database and automatically recovers in case of server failures and disasters, so I never worry about database backups/restores; provides a staging environment for deploying applications for testing purposes and moving them to production – I upgrade twice a month on average. With Windows Azure I no longer focus on server maintenance or data backups. They are taken care of by the Microsoft team and I just focus on building my website. Wish there were a low-cost Linux version of Windows Azure so that I could stop worrying about server maintenance of this blog!! If you are looking for Windows hosting, look no further than Windows Azure. If you find $20/month a bit expensive to start with, you may explore Azure Websites (a sort of shared hosting environment), which is free to start with, and as your traffic grows you can move to paid hosting.

    Read the article

  • Embracing Imperfection

    - by Johnm
    The pursuit of perfection is a road on which we often find ourselves traveling. It is an unpaved road filled with pot-holes and ruts that often destroy our stride. The shoulders of this road are lined with the bones and rotting carcasses of well-planned projects, solutions and dreams of others who have dared the journey. Often the choice to engage in this travel is a compulsive one. We can't help but pack our bags and make the trip. We justify it by equating it to the delivery of a quality product or service. We use our past travels as validation of our worthiness and value. Our shared experience, as tortured pilgrims of perfection, reveals that each odyssey that bewitched us resulted in a stark reminder of the very weaknesses and fears that we were attempting to mollify. The voice of the critic that berated us for the lack of craftsmanship was our own. At the end of the journey, though, our own critical voice was joined by the gnashing of teeth of those who could not reap the fruit of our labor due to its lack of timely delivery. There is another road on which to travel. It is the pursuit of embracing imperfection. The cost of traveling this route is your contribution to its eternal construction. Each segment is designed uniquely. At times it has the appearance of a patchwork quilt; at other times it is well organized and highly measured. In all cases, its construction has continually advanced and been utilized as each segment was delivered by its architect. Those who choose this spindle of the crossroads crack open the shells of their fears to reveal the vapor that is within. They construct their houses upon these shells. Through their hunger for mastery they wring every drop of nectar from failure and discard its husks to the ditches of this road. Through their efforts the thoroughfare begins to develop a personality of its own, a beautifully human one, rich with the strengths and weaknesses of all of its contributors. Like many of us, I have found that the pursuit of perfection has not served me well. In fact, I would say that it has been more damaging than it has been helpful. While the perfectionist in me occasionally makes its presence known, I consider myself a "recovering perfectionist". It is evident to me that there is immense beauty found in imperfection. I choose to embrace it. It is grounding. It is constructive. It is honest.

    Read the article

  • Clone a VirtualBox Machine

    I just installed VirtualBox, which I want to try out based on recommendations from peers for running a server from within my Windows 7 x64 OS.  I've never used VirtualBox, so I'm certainly no expert at it, but I did want to share my experience with it thus far.  Specifically, my intention is to create a couple of virtual machines.  One I intend to use as a build server, for which a virtual machine makes sense because I can easily move it around as needed if there are hardware issues (it's worth noting my need for setting up a build server at the moment is a result of a disk failure on the old build server).  The other VM I want to set up will act as a proxy server for the issue tracking system we're using at Code Project, Axosoft OnTime.  They have a Remote Server application for this purpose, and since the OnTime install is 300 miles away from my location, the Remote Server should speed up my use of the OnTime client by limiting the chattiness with the database (at least, that's the hope). So, I need two VMs, and I'm lazy.  I don't want to have to install the OS and such twice.  No problem, it should be simple to clone a VirtualBox machine, or clone a VirtualBox hard drive, right?  Well, unfortunately, if you look at the UI for VirtualBox, there's no such command.  You're left wondering "How do I clone a VirtualBox machine?" or the slightly related "How do I clone a VirtualBox hard drive?" If you've used VirtualPC, then you know that it's actually pretty easy to copy and move around those VMs.  Not quite so easy with VirtualBox.  Finding the files is easy; they're located in your user folder within the .VirtualBox folder (possibly within a HardDisks folder).  The disks have a .vdi extension and will be pretty large if you've installed anything.  The one shown here has just Windows Server 2008 R2 installed on it, nothing else. If you copy the .vdi file and rename it, you can use the Virtual Media Manager to view it and you can create a new machine and choose the new drive to attach to.  Unfortunately, if you simply make a copy of the drive, this won't work and you'll get an error that says something to the effect of: Cannot register the hard disk PATH with UUID {id goes here} because a hard disk PATH2 with UUID {same id goes here} already exists in the media registry (PATH to XML file). There are command line tools you can use to do this in a way that avoids this error.  Specifically, the c:\Program Files\Sun\VirtualBox\VBoxManage.exe program is used for all command line access to VirtualBox, and to copy a virtual disk (.vdi file) you would call something like this: VBoxManage clonehd Disk1.vdi Disk1_Copy.vdi However, in my case this didn't work.  I got basically the same error I showed above, along with some debug information for line 628 of VBoxManageDisk.cpp.  As my main task was not to debug the C++ code used to write VirtualBox, I continued looking for a simple way to clone a virtual drive.  I found it in this blog post. The Secret setvdiuuid Command VBoxManage has a whole bunch of commands you can use with it; just pass it /? to see the list.  However, it also has a special command called internalcommands that opens up access to even more commands.  The one that's interesting for us here is the setvdiuuid command.  By calling this command and passing in the file path to your vdi file, it will reset the UUID to a new (random, apparently) UUID.  This then allows the virtual media manager to cope with the file, and lets you set up new machines that reference the newly UUID'd virtual drive.
The full command line would be: VBoxManage internalcommands setvdiuuid MyCopy.vdi The following screenshot shows the error when trying clonehd as well as the successful use of setvdiuuid. Summary Now that I can clone machines easily, it's a simple matter to set up base builds of any OS I might need, and then fork from there as needed.  Hopefully the GUI for VirtualBox will be improved to include better support for copying machines/disks, as this is, I'm sure, a very common scenario.

    Read the article

  • After installing the geos library (C++ and C), trying to install the rgeos package (R) reports geos-config missing!

    - by user1873888
    Knowing that the package rgeos, from the R language, requires a prior installation of the geos libraries, I installed both libgeos and libgeos-c1 (3.2.2), using the synaptic installer on my Ubuntu 12.04 (32 bit) machine. Then I tried to install rgeos directly from the R console, and it issued a message to the effect that geos-config was not found. The output is as follows: > install.packages("rgeos") Installing package(s) into ‘/home/checo/R/i486-pc-linux-gnu-library/2.15’ (as ‘lib’ is unspecified) also installing the dependency ‘sp’ probando la URL 'http://cran.rstudio.com/src/contrib/sp_1.0-9.tar.gz' Content type 'application/x-gzip' length 882102 bytes (861 Kb) URL abierta ================================================== downloaded 861 Kb probando la URL 'http://cran.rstudio.com/src/contrib/rgeos_0.2-19.tar.gz' Content type 'application/x-gzip' length 221471 bytes (216 Kb) URL abierta ================================================== downloaded 216 Kb * installing *source* package ‘sp’ ... ** package ‘sp’ successfully unpacked and MD5 sums checked ** libs gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c Rcentroid.c -o Rcentroid.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c gcdist.c -o gcdist.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c init.c -o init.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip.c -o pip.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c pip2.c -o pip2.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c sp_xports.c -o sp_xports.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c surfaceArea.c -o surfaceArea.o gcc -std=gnu99 -I/usr/share/R/include -DNDEBUG -fpic -O3 -pipe -g -c zerodist.c -o zerodist.o gcc -std=gnu99 -shared -o sp.so Rcentroid.o gcdist.o init.o pip.o pip2.o sp_xports.o surfaceArea.o zerodist.o -L/usr/lib/R/lib -lR installing to /home/checo/R/i486-pc-linux-gnu-library/2.15/sp/libs ** R ** data ** demo ** inst ** preparing package for lazy loading ** help *** installing help indices ** building package indices ** installing vignettes ‘intro_sp.Rnw’ ‘over.Rnw’ ** testing if installed package can be loaded * DONE (sp) * installing *source* package ‘rgeos’ ... ** package ‘rgeos’ successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgeos: 0.2-17 checking for /usr/bin/svnversion... no configure: svn revision: 394 checking geos-config usability... ./configure: line 1385: geos-config: command not found no configure: error: geos-config not usable ERROR: configuration failed for package ‘rgeos’ * removing ‘/home/checo/R/i486-pc-linux-gnu-library/2.15/rgeos’ Warning in install.packages : installation of package ‘rgeos’ had non-zero exit status Forgive my ignorance, but I don't know where this file, "geos-config", comes from: should it be generated by the gcc compilations above, or should it have been installed when the libgeos libraries were installed? I learnt, from another machine, that "geos-config" is an executable and that it should be installed in /usr/bin. Do you have any idea what's wrong with my procedure? Thanks, -Sergio.

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared hosting provider to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too, and even though the browser displays the full page correctly, the HTTP return code is 403. I've checked the home page and it's correctly returning a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches but no luck. There is only one change: previously the site was the primary domain but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page. Fetch as Google URL: http://MYSITE.COM/-----------------REMOVED.html Date: Thursday, June 20, 2013 at 10:32:14 PM PDT Googlebot Type: Web Download Time (in milliseconds): 3899 HTTP/1.1 403 Forbidden Date: Fri, 21 Jun 2013 05:32:15 GMT Server: Apache P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM" Expires: Mon, 1 Jan 2001 00:00:00 GMT Cache-Control: post-check=0, pre-check=0 Pragma: no-cache Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/ Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT Keep-Alive: timeout=5, max=75 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" > <head> <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" /> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <meta name="robots" content="index, follow" /> <meta name="keywords" content="" /> <<<<<<TRIMMED>>>>>>>>>>>>>>

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits: 64-bit Archives After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits. Guidance The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects. Stub Objects This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object, while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them. The elffile Utility A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files. This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the groundbreaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we’ve mapped out and agreed the three data models, the Conceptual, the Logical and the Physical, then the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however ‘agile’ you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don’t assume that the organization’s understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding of what an entity means and what it stores, from another. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don’t collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequences of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a ‘customer’. We’d love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption.  There is a new readonly property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.

    $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
    NAME                   PROPERTY       VALUE                  SOURCE
    rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
    rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
    rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

    The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created.  If we haven't changed a wrapping key then it will be the same as the creation date.  It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead. Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (ie the size given in the encryption property).  In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length.  This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager.  This is because currently the Oracle Key Manager stores AES 256 bit keys only.  Now with Solaris 11.1 this restriction has been removed. There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase': since this is a key generated internally to libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length size of the encryption property. The pam_zfs_key module has been updated so that it allows you to specify encryption=off. There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we could support Zones On Shared Storage where all the datasets in the ZFS pool were encrypted, which I discussed in my previous blog entry. If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract, have your support rep log an enhancement request).
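
    As a side note on the passphrase keysource format mentioned above: PKCS#5 PBKDF2 is also available in the standard Java crypto API, so a rough illustration of deriving a 256-bit wrapping key from a passphrase looks like the sketch below. The salt, iteration count and PRF here are made-up illustrative values, not what libzfs actually uses.

    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PbkdfSketch {
        public static void main(String[] args) throws Exception {
            char[] passphrase = "correct horse battery staple".toCharArray();
            byte[] salt = { 1, 2, 3, 4, 5, 6, 7, 8 }; // illustrative only
            int iterations = 10000;                    // illustrative only
            int keyLengthBits = 256;                   // matches encryption=aes-256-ccm above

            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] wrappingKey = factory.generateSecret(
                    new PBEKeySpec(passphrase, salt, iterations, keyLengthBits)).getEncoded();
            System.out.println("Derived a " + (wrappingKey.length * 8) + "-bit wrapping key");
        }
    }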

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices? The classic answer: write a web service layer Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different? The better answer: the iSql SDK Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API. OSQL in 25 lines of code As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video. Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Self-Executing Anonymous Function vs Prototype

    - by Robotsushi
    In JavaScript there are a few clearly prominent techniques for creating and managing classes/namespaces. I am curious what situations warrant using one technique vs. the other. I want to pick one and stick with it moving forward. I write enterprise code that is maintained and shared across multiple teams, and I want to know what the best practice is when writing maintainable JavaScript. I tend to prefer Self-Executing Anonymous Functions; however, I am curious what the community vote is on these techniques. Prototype : function obj() { } obj.prototype.test = function() { alert('Hello?'); }; var obj2 = new obj(); obj2.test(); Self-Executing Anonymous Function : //Self-Executing Anonymous Function (function( skillet, $, undefined ) { //Private Property var isHot = true; //Public Property skillet.ingredient = "Bacon Strips"; //Public Method skillet.fry = function() { var oliveOil; addItem( "\t\n Butter \n\t" ); addItem( oliveOil ); console.log( "Frying " + skillet.ingredient ); }; //Private Method function addItem( item ) { if ( item !== undefined ) { console.log( "Adding " + $.trim(item) ); } } }( window.skillet = window.skillet || {}, jQuery )); //Public Properties console.log( skillet.ingredient ); //Bacon Strips //Public Methods skillet.fry(); //Adding Butter & Frying Bacon Strips //Adding a Public Property skillet.quantity = "12"; console.log( skillet.quantity ); //12 //Adding New Functionality to the Skillet (function( skillet, $, undefined ) { //Private Property var amountOfGrease = "1 Cup"; //Public Method skillet.toString = function() { console.log( skillet.quantity + " " + skillet.ingredient + " & " + amountOfGrease + " of Grease" ); console.log( isHot ? "Hot" : "Cold" ); }; }( window.skillet = window.skillet || {}, jQuery )); //end of skillet definition try { //12 Bacon Strips & 1 Cup of Grease skillet.toString(); //Throws Exception } catch( e ) { console.log( e.message ); //isHot is not defined } I feel that I should mention that the Self-Executing Anonymous Function is the pattern used by the jQuery team. Update When I asked this question I didn't truly see the importance of what I was trying to understand. The real issue at hand is whether or not to use new to create instances of your objects, or to use patterns which do not require constructors or the use of the new keyword. I added my own answer, because in my opinion we should make use of patterns which don't use the new keyword. For more information please see my answer.

    Read the article

  • Segmentation and Targeting: Your Tools for Personalizing the Online Customer Experience

    - by Christie Flanagan
    In order to deliver the kind of personalized and engaging online experiences that customers expect today, look to segmentation and targeting.  Segmentation is the practice of dividing your site visitors into distinct groups based on shared characteristics or behavior – for example, a segment may consist of site visitors who have visited pages related to a certain product type, or they may consist of visitors within the same age group or geographic area.  The idea is that those within a segment are more likely to have common needs, problems or interests that can be served by your business. Targeting is the process by which the most relevant content, whether an article, promotion or other piece of content, is delivered to your visitors based on their segment membership. Segmentation and targeting are used to drive greater engagement on your web presence by delivering content to your site visitors that is tailored to their interests, behavior or other attributes.  You may have a number of different goals for your segmentation and targeting efforts: Up-sell or cross-sell to your customers Conduct A/B testing on your offers and creative Offer discounts, promotions or other incentives for the time and duration that you specify Make it easier to find relevant information about products and services Create a premium content model There are two different approaches you can take toward segmentation and targeting for your online customer experience initiatives. The first is more of a manual process, in which marketers manage the process of determining which segments to create and which content to target to those segments. The benefit of this approach is that it gives marketers a high level of control over the whole process, which works well when you have a thorough understanding of your segments and which content is most likely to serve their needs.  Tools for marketer-managed segmentation and targeting are often built right into your WEM platform, as they are with Oracle WebCenter Sites. The downside is that the more segments and content you have, the more time-consuming and complicated it can be to manage manually. The second approach relies on predictive intelligence to automate the segmentation and targeting process.  This allows optimization of the process to occur in real time. This approach helps reduce the burden of manual segmentation and targeting and can result in new insights into segments that you may never have thought of on your own.  It also provides you with the capability to quickly test new offers and promotions on your site.  Predictive segmentation and targeting can be achieved by using Oracle WebCenter Sites and Oracle Real-Time Decisions together. Get a taste for how Oracle WebCenter Sites and Oracle Real-Time Decisions combine to deliver powerful capabilities for predictive segmentation and targeting by watching this on demand webcast introducing Oracle WebCenter Sites 11g or by reading IDC’s take on the latest release of Oracle’s web experience management solution.  Be sure to return to the Oracle WebCenter blog on Thursday for a closer look at how to optimize the online customer experience using these two products together.

    Read the article

  • Oracle OpenWorld - Events of Interest

    - by Larry Wake
    I mentioned the "Focus On Oracle Solaris" document the other day, which lists many of the Solaris-related events at Oracle OpenWorld this year; today I thought I'd highlight a few sessions you might find interesting. Monday, October 1st: 4:45 PM - Get Proactive: Best Practices for Maintaining and Upgrading Oracle Solaris (Moscone South 252) This session covers best practices for upgrading and patching and how to take advantage of unique technologies in Oracle Solaris 10 and 11. Learn how to get maximum value from My Oracle Support for both reactive and proactive requirements. Understand the benefits of secure remote access and how Oracle Support experts use collaborative shared sessions combined with Oracle Solaris technologies such as DTrace. Tuesday, October 2nd: 10:15 AM -  How to Increase Performance and Agility with an Open Data Center Fabric (Moscone South 200) If you haven't had a chance to hear about Xsigo Systems, this is a golden opportunity while you're at OpenWorld. Now part of Oracle, Xsigo's network virtualization technology is designed to increase both application performance and management efficiency, through a combination of software-defined network technology and the industry’s fastest fabric, allowing data center to converge Ethernet and Fibre Channel connectivity to a single fabric, to reduce complexity by 70 percent and CapEx by 50 percent while providing more I/O bandwidth to your applications. Wednesday, October 3rd: 10:15 AM - General Session: Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap (Moscone South 103) Markus Flierl, head of Oracle Solaris Core Engineering, will outline the strategy and roadmap for Oracle Solaris,  how Oracle Solaris 11 is being deployed in cloud computing and the unique optimizations in Oracle Solaris 11 for the Oracle stack. The session also offers a sneak peek at the latest technology under development in Oracle Solaris, and what customers can expect to see in the coming updates. Plus, there are several Hands-On Labs: Monday, October 1st: 10:45 AM - 11:45 AM - Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications (Marriott Marquis - Salon 14/15) 4:45 PM - 5:45 PM - Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11  (Marriott Marquis - Salon 14/15) Tuesday, October 2nd: 1:15 PM - 2:15 PM - Virtualizing Your Oracle Solaris 11 Environment  (Marriott Marquis - Salon 10/11) Wednesday, October 3rd: 3:30 PM - 4:30 PM - Large-Scale Installation and Deployment of Oracle Solaris 11 (Marriott Marquis - Salon 14/15) There's plenty more--see the "Focus On Oracle Solaris" guide. See you next week in San Francisco!

    Read the article

  • My father wants to learn PHP-MySQL to port his application. What should I do to help?

    - by adijiwa
    My father is a doctor/physician. About 15 years ago he started writing an application to handle his patients' medical records in his clinic at home. The app has the ability to input patients' medical records (obviously), search patients by some criteria, manage medicine stocks, output receipts to a printer, and some more CRUDs. He wrote it in dBase III+. A few years later he migrated to FoxPro 2.6 for DOS and finally in a few more years he rewrote his app in Visual FoxPro 9. And now (actually two years ago) he wants to rewrite it in PHP, but he doesn't know how. The Visual FoxPro version of this app is still running and has no serious problems except that it sometimes performs slowly. Usually there are 1-5 concurrent users. The binary and database files are shared via Windows share. He did all the coding as a hobby and for free (it is for his own clinic after all). He also uses this app in two other offices he manages. Some reasons why he wants to rewrite in PHP-MySQL: He wants to learn Easier to deploy (?) Easier client setup, needs only a browser What should I do to help my father? How should he start? I explored some options: I let my father learn PHP and MySQL (and HTML (and JavaScript?)) from scratch. I create/bundle a framework. I'm thinking of bundling CodeIgniter and a web UI framework (any suggestions?), especially to reduce the effort of writing presentation code. What do you think? tl;dr My father (a doctor) wants to rewrite his Visual FoxPro app in PHP-MySQL. He knows very little of PHP and MySQL but he wants to learn. What should I do to help? How should he start? Some facts: My father is 50 years old. His first encounter with a PC was in the early 1980s. It was an IBM PC with an Intel 8088. He knows BASIC. He taught me how to use DOS and how to program in BASIC. The other language he knows fairly well is dBase/FoxPro. I got my bachelor's CS degree last year. I know the internals of my father's app because sometimes he asks me to help him write it. Sorry for my English.

    Read the article

  • Got Samba, got PyNeighbourhood, but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post before, but then I could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem. Anyway, I have reworded things a bit. This seems to be a number one requirement for lots of people, and 2 months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and get on with using it all. Using all command approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks. So far I have installed Samba and fiddled with its config file. The day I did all that it worked from XP to Ubuntu. When I came back two days later to transfer my data over it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu, and it even shows this under the XP workgroup name. But instructions on setting the connection up seem to relate to an earlier version and nothing seems to work there either. (I unshared most of those test folders but they still show up here, but that is another question.) When I click on mount (I can only click on the one on the Ubuntu machine; there is one with no name, so I assume this to be my attempt to add one XP shared drive using the IP address), I get errors. (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", mount error(6): No such device or address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) OK, I tried to find the manual referred to... only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), people are encouraged into bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order: Better Review. It allows multiple recipients to review the file and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients. Always up to date. The attachment becomes a fork instead of an always up-to-date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that now I am missing because you decided to attach the file instead of sharing a link. Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc. Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control. Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it were a link, they would be immediately caught up without further actions. Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document. Save on storage. So this doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy). Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo :: Int -> a Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in a, say, (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that computation that looks shared might not be, because the type parameter is implicit. Type defaulting: Again one of the most annoying things about Haskell. Happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types, than to have functions that are restricted in their input types, but we just pass polymorphic values to them. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
    See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality. Fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
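
    As a rough sketch of the usage pattern described above (the annotation lived in sun.misc in the JDK 8 era and typically needs -XX:-RestrictContended to take effect outside the JDK itself; treat the package name and flag as assumptions for your JVM version), a hot, write-shared field can be padded away from its mostly read-only neighbors like this:

    import sun.misc.Contended; // JDK 8 location; later JDKs moved the annotation to an internal package

    public class HitCounter {
        // Hot, frequently written shared field: isolate it on its own cache line(s)
        // so writes by the owning thread don't invalidate the line holding 'label'.
        @Contended
        volatile long hits;

        // Mostly read-only, cold field: default placement is fine.
        final String label;

        HitCounter(String label) {
            this.label = label;
        }

        void record() {
            hits++; // frequent writes; kept away from the read-mostly field above
        }
    }

    Fields that tend to be written together by the same thread can instead share a @Contended group tag (a string argument to the annotation) so they are padded as one group rather than individually.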

    Read the article

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project that is designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application: A Windows Service that monitors the recording folder. Once a new recording is completed that meets specific criteria, it will call several 3rd party CLI applications to remove the commercials and re-encode the video into a more hard-drive-friendly format. A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows service, except manually on specific files instead of automatically based on specific criteria. (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.) Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications, but these 2 components do not need to interact otherwise. The 2nd and 3rd bullets seem to share some common functionality: both will have a GUI, and both will have to help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet #3) really does not need to interact with the service at all, except for possibly sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate. The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing user interaction with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows service to immediately use the new settings.

    Read the article

  • After startup, display reverts to one desktop duplicated on both screens

    - by Skilldrick
    I have a Samsung R530 laptop. I'm actually running Linux Mint, which is based on Ubuntu (which is why I thought it would be OK to post this here). I'm now getting exactly the same problem on Ubuntu 10.10. I've got it set up with a secondary LCD monitor, and I've changed the display settings so the laptop screen and monitor are at optimal resolution, with the desktop shared between them. After startup though (typically around 10-30 seconds) the display reverts to 1024 x 768 on both screens, and the display is duplicated (i.e. both screens show the same thing). I think this is something to do with my video card, as I had the same problem when I was running Arch Linux. Any ideas? Edit: 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 09) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09) 00:1a.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1a.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1d.0 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB Controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.7 USB Controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation ICH9M/M-E SATA AHCI Controller (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 02:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) 04:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8040 PCI-E Fast Ethernet Controller It seems it's Intel, not Nvidia. I have a current workaround, which is to have a launcher to a script containing xrandr --output LVDS1 --mode 1366x768 --output HDMI1 --mode 1280x1024 --left-of LVDS1 on my desktop, and to run it after the displays revert, but it's not ideal.

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2. MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:
    - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
    - Support for new MySQL 5.7.4 SQL syntax and configuration options.
    - A Metadata Locks view that shows the locks connections are blocked on or waiting for.
    - MySQL Fabric cluster connectivity: browse, view the status of, and connect to any MySQL instance in a Fabric cluster.
    - An MS Access migration wizard, to easily move to MySQL databases.
    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:
    - Direct shortcut buttons to commonly used features in the schema tree.
    - Improved results handling: columns auto-size better and their widths are saved, and fonts can be customized.
    - Result sets can be "pinned" to persist viewing data.
    - A convenient Run SQL Script command to directly execute SQL scripts without loading them first.
    - Database modeling now allows changes to the formatting of note objects, and attached SQL scripts can be included in forward-engineering and synchronization scripts.
    - Integrated Visual Explain within the result set panel, with drill-down for large to very large explain plans.
    - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
    And much more. The list of provided binaries was updated, and MySQL Workbench binaries are now available for Windows 7 or newer, Mac OS X Lion or newer, Ubuntu 12.04 LTS and Ubuntu 14.04, Fedora 20, Oracle Linux 6.5, and Oracle Linux 7, plus sources for building on other Linux distributions. For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151 Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/ On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something an RDBMS was not originally intended to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can suffer a complete collapse if a situation exploits a weakness shared by that population. The same is true of not using the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not. So take some time today to learn something new. The way I do this is to select a given problem and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures is the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

    Read the article

  • Taking a Projects Development to the Next Level

    - by user1745022
    I have been looking for advice for a while on how to handle a project I am working on, but to no avail. I am pretty much on my fourth iteration of improving an "application": the first two versions were in Excel, the third in Access, and now it is moving to Visual Studio. The field is manufacturing. The basic idea is that I take read-only data from a massive Sybase server, filter it, and create much smaller tables in Access daily (using delete and append queries), and then do a bunch of processing. More specifically, I use a series of queries to either combine data from multiple tables or group data in specific ways (aggregate functions), and then I place this data into a table so I can sort and manipulate it using DAO.Recordset and run multiple custom algorithms. This process is repeated multiple times throughout the database until a set of relevant tables is created. Many times I will create a field in a query with a placeholder value such as 1.1 so that, when I append it to a table, I can store information from the algorithms in that field; as the process continues, the number of fields in the tables changes. The overall application consists of 4 "back-end" databases linked together on a shared drive, with various outputs (either front-end Access applications or Excel). So my question is: is this essentially how many data-driven applications that solve problems work? Each back-end database is updated with fresh data daily; updating takes around 10 seconds for three of them and 2 minutes for the fourth. Project objectives: I want to move to SQL Server soon, the front end will be a web application (I know basic web development and like the administration flexibility), and Visual Studio will be the IDE with C#/.NET. Should these algorithms be run "inside the database," or as a series of C# functions on each server request? I know you're not supposed to store data in a database unless it is an actual data point, and in Access I have many columns that just hold calculations from algorithms in VBA. The truth is, I have seen multiple professional Access applications and have never seen one with the complexity of mine or that does even close to what mine does (for better or worse). But I know some professional software applications are 1000 times better than mine. So please, please, please give me a suggestion of some sort. I have been completely on my own and need some guidance on how to approach this project the right way.
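    On the "inside the database vs. C# functions" question, a common split is to let SQL Server do the set-based work (filtering, grouping, aggregates) that the chain of Access delete/append queries does today, and keep only the genuinely procedural algorithms in C#, recomputing derived values rather than persisting them as extra columns. Here is a rough sketch of that split; the table, column, and connection names are invented purely for illustration and are not from the poster's schema.

        // Hedged sketch only: the schema and connection string below are made up.
        using System;
        using System.Data.SqlClient;

        class DailyRollup
        {
            static void Main()
            {
                string connStr = "Server=.;Database=Manufacturing;Integrated Security=true;";

                using (var conn = new SqlConnection(connStr))
                {
                    conn.Open();

                    // The set-based part stays in SQL Server, replacing the
                    // staged Access tables built with delete/append queries.
                    string sql =
                        "SELECT MachineId, CAST(ReadingTime AS date) AS ReadingDate, " +
                        "       AVG(CycleSeconds) AS AvgCycle, COUNT(*) AS Runs " +
                        "FROM dbo.ProductionReadings " +
                        "GROUP BY MachineId, CAST(ReadingTime AS date)";

                    using (var cmd = new SqlCommand(sql, conn))
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Custom algorithms that are awkward to express in SQL
                            // run here, instead of being stored as calculated columns.
                            double avgCycle = Convert.ToDouble(reader["AvgCycle"]);
                            int runs = Convert.ToInt32(reader["Runs"]);
                            Console.WriteLine("{0} on {1:d}: {2:F2}s average over {3} runs",
                                reader["MachineId"], reader["ReadingDate"], avgCycle, runs);
                        }
                    }
                }
            }
        }

    The usual trade-off: joins and aggregates scale far better inside the engine, while branching, row-by-row logic is clearer and easier to unit test in C#; either way, values that can always be recomputed generally don't need their own columns.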

    Read the article

  • The Connected Company: WebCenter Portal Activity Streams

    - by Michael Snow
    Guest post by Mitchell Palski, Oracle Staff Sales Consultant. Social media is sure to have made its way into your company or government organization. Whether it's discussion threads, blog posts, Facebook-style profile pages, or just a simple instant messenger application, in one way or another your employees are connected. What are the objectives of leveraging social media in your organization?
    - Facilitating knowledge transfer
    - More effectively organizing team events
    - Generating inter-community discussions to solve problems
    - Improving resource management
    - Increasing organizational awareness
    - Creating an environment of accountability
    Do any of the business objectives above stand out to you as needs? If so, consider leveraging the WebCenter Portal Activity Stream as part of your solution. In WebCenter Portal, the Activity Stream feature provides a streaming view of the activities of your connections, actions taken in portals, and business activities that looks a lot like a combined Facebook and Twitter newsfeed. Activity Stream can note when a user:
    - Posts feedback (comments)
    - Uploads a document
    - Creates a new blog, page, event, or announcement
    - Starts a new discussion
    - Streams messages and attachments entered through WebCenter Publisher (similar to Twitter)
    Through Activity Stream Preferences, you can select which of these activities to show or hide from your personal Activity Stream. Here's what you get:
    - A real-time stream of activities within a portal or sub-portal increases awareness across your organization or within a working group.
    - A complete list of user actions reduces the time-to-find for users that need to interact with the latest activities in your portal.
    - Users can publish to their groups when tasks are finished, for complete group traceability and accountability as well as improved resource management.
    - Project discussions and shared documents that require the expertise of someone outside of a working group get increased visibility across your organization.
    There's a reason that commercial social media tools like Facebook and Twitter have been so successful: they spread information in an aesthetically appealing and easy-to-read format. Strategically placing an Activity Feed within your Portal is analogous to sending your employees a daily newsletter, events calendar, recent documents report, and list of announcements - BUT ALL IN ONE!

    Read the article
