Search Results

Search found 9983 results on 400 pages for 'fuzzy c means'.


  • Push or Pull Mobile Coupons?

    - by David Dorf
    Mobile phones allow consumers to receive coupons in context, which increases their relevance and therefore redemption rates. Using your current location, you can get coupons that can be redeemed nearby for the things you want now. Receiving a coupon for something you wanted last week or something you might buy next month just isn't as valuable. I previously talked about Placecast and their concept of pushing offers to mobile phones that cross "geo-fences" around points of interest, like store locations. This push model is an automatic reminder that there are good deals just up ahead. This model works well in dense cities where people walk, but I question how effective it will be in the suburbs where people are driving. McDonald's recently ran a campaign in Finland where they pushed offers to GPS devices when cars neared their restaurants. Amazingly, they achieved a 7% click-through rate. But 8coupons.com sees things differently. They prefer the pull model that requires customers to initiate a search for nearby coupons, and they've done some studies to better understand what "nearby" means. It turns out that there are concentric search circles that emanate from your home and work. From inner to outer, people search for food, drink, shopping, and entertainment. Intuitively, that feels about right. So the question is, do consumers prefer the push or pull model for offers? No doubt the market is big enough for both. These days it's not good enough to just know who your customers are -- you also need to know where they are so you can catch them in the right moment. According to Borrell Associates, redemption rates of mobile coupons are 10x that of traditional mail and newspaper coupons. One thing is for sure: assuming 85% of consumers regularly spend money within 5 miles of home and work, location-based coupons make tons of sense.

    Read the article

  • Fiber-Optic Cable Trick Brings Remote Triggering to Older Flashes

    - by Jason Fitzpatrick
    Many older flashes lack a jack for a sync cable and rely exclusively on a simple slave mode triggered by the primary flash. This hack uses a piece of scrap fiber optic cable to trigger the flash in bright conditions. Using a flash as an optical slave indoors isn’t much of a problem, but if you introduce bright light (such as outdoor lighting conditions), the ambient light can overpower the small on-camera flash and render the optical slave function useless. To overcome this, Marcell over at Fiber Strobe (a blog dedicated to cataloging experiments in incorporating fiber optics into photography) came up with a simple workaround. By using some foam crafting materials and tape, he whipped up a simple mount for a strand of scrap fiber optic cable to connect between the on-camera flash and the sensor on the slave flash. Once attached, it works exactly as a sync cable would, except it’s transmitting a pulse of light instead of a pulse of electricity. Hit up the link below for more pictures and a build guide. DIY Fiber Sync Cord [via DIY Photography]

    Read the article

  • CodePlex Daily Summary for Wednesday, June 04, 2014

    CodePlex Daily Summary for Wednesday, June 04, 2014

    Popular Releases

    SEToolbox: SEToolbox 01.032.018 Release 1: Added ability to merge/join two ships, regardless of origin. Added Language selection menu to set display text language (for SE resources only), and fixed inherent issues. Added full support for Dedicated Servers, allowing use on PC without Steam present, and only the Dedicated Install. Added Browse button for used specified save game selection in Load dialog. Installation of this version will replace older version.
    DNN Blog: 06.00.07: Highlights: Enhancement: Option to show management panel on child modules Fix: Changed SPROC and code to ensure the right people are notified of pending comments. (CP-24018) Fix: Fix to have notification actions authentication assume the right module id so these will work from the messaging center (CP-24019) Fix: Fix to issue in categories in a post that will not save when no categories and no tags selected
    csv2xlsx: Source code: Source code of application
    HigLabo: HigLabo_20140603: Bug fix of MimeParser for nested MimeContent parsing.
    Family Tree Analyzer: Version 4.0.0.0-beta2: This major release introduces a significant new feature the ability to search the UK Ordnance Survey 50k Gazetteer for small placenames that often don't appear on a modern Google Map. This means significant numbers of locations that were previously not found by Google Geocoding will now be found by using OS Geocoding. In addition I've added support for loading the Scottish 1930 Parish boundary maps, whilst slightly different from the parish boundaries in use in the 1800s that are familiar to...
    TEncoder: 4.0.0: --4.0.0 -Added: Video downloader -Added: Total progress will be updated more smoothly -Added: MP4Box progress will be shown -Added: A tool to create gif image from video -Added: An option to disable trimming -Added: Audio track option won't be used for mpeg sources as default -Fixed: Subtitle position wasn't used -Fixed: Duration info in the file list wasn't updated after trimming -Updated: FFMpeg
    SSIS Multiple Hash: Multiple Hash 1.6.2.3: Release Notes This release is an SQL 2014 fix release. It addresses the following: The 1.6.1.2 release's SQL 2014 installer didn't work for SQL 2014 RTM. The x64 installer also showed x86 for both x64 and x86 in SQL 2014. Please download the x64 of x32 file based on the bitness of your SQL Server installation. BIDS or SSDT/BI ONLY INSTALLS ARE NOT DETECTED. You MUST use the Custom install, to install when the popup is shown, and select which versions of SQL Server are available. A war...
    QuickMon: Version 3.14 (Pie release): This is unofficially the 'Pie' release. There are two big changes. 1. 'Presets' - basically templates. Future releases might build on this to allow users to add more presets. 2. MSI Installer now allows you to choose components (in case you don't want all collectors etc.). This means you don't have to download separate components anymore (AllAgents.zip still included in case you want to use them separately) Some other changes: 1. Add/changed default file extension for monitor packs to *.qmp (...
    VeraCrypt: VeraCrypt version 1.0d: Changes between 1.0c and 1.0d (03 June 2014): Correct issue while creating hidden operating system. Minor fixes (look at git history for more details).
    Keepass2Android: 0.9.4-pre1: added plug-in support: See settings for how to get plug-ins! published QR plug-in (scan passwords, display passwords as QR code, transfer entries to other KP2A devices) published InputStick plugin (transfer credentials to your PC via bluetooth - requires InputStick USB stick) Third party apps can now simply implement querying KP2A for credentials. Are you a developer? Please add this to your app if suitable! added TOTP support (compatible with KeeOTP and TrayTotp) app should no l...
    The IRIS Toolbox Project: IRIS Reference Manual Release 20140602: Reference Manual for Release 20140602.
    Microsoft Web Protection Library: AntiXss Library 4.3.0: Download from nuget or the Microsoft Download Center This issue finally addresses the over zealous behaviour of the HTML Sanitizer which should now function as expected once again. HTML encoding has been changed to safelist a few more characters for webforms compatibility. This will be the last version of AntiXSS that contains a sanitizer. Any new releases will be encoding libraries only. We recommend you explore other sanitizer options, for example AntiSamy https://www.owasp.org/index....
    Z SqlBulkCopy Extensions: SqlBulkCopy Extensions 1.0.0: SqlBulkCopy Extensions provide MUST-HAVE methods with outstanding performance missing from the SqlBulkCopy class like Delete, Update, Merge, Upsert. Compatible with .NET 2.0, SQL Server 2000, SQL Azure and more! Bulk Methods: BulkDelete BulkInsert BulkMerge BulkUpdate BulkUpsert Utility Methods: GetSqlConnection GetSqlTransaction You like this library? Find out how and why you should support Z Project Become a Member http://zzzproject.com/resources/images/all/become-a-member.png|ht...
    Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines - Added all the parameters available from the Timeline Endpoints in Tweetinvi. - This is available for HomeTimeline, UserTimeline, MentionsTimeline // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.CreateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweetinvi 0.9.3.1...
    Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
    Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: - Int/short Set methods of WritablePixelCollection are now unsigned. - The Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
    fnr.exe - Find And Replace Tool: 1.7: Bug fixes Refactored logic for encoding text values to command line to handle common edge cases where find/replace operation works in GUI but not in command line Fix for bug where selection in Encoding drop down was different when generating command line in some cases. It was reported in: https://findandreplace.codeplex.com/workitem/34 Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024 Fix for finding replacing...
    SoundCloud Downloader: SC Downloader V1.0: Newest release Requirements .NET Framework 4.0/4.5 Changes since releasing the source Replaced ProgressBar and Buttons with the ones i created from scratch, you may use the .dll files on your own risk. Changed the authentication mode to Windows NOTE! When extracted, execute setup.exe to launch the installer! NOTE! ENJOY!
    VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: changes NEW: Added Support for 'GokoImage.com' links NEW: Added Support for 'ViperII.com' links NEW: Added Support for 'PixxxView.com' links NEW: Added Support for 'ImgRex.com' links NEW: Added Support for 'PixLiv.com' links NEW: Added Support for 'imgsee.me' links NEW: Added Support for 'ImgS.it' links

    New Projects

    2112110148: 2112110148 Kieu Thi Phuong Anh
    AutoHelp: Inspired by Docy, AutoHelp exposes XML documentation embedded in sources code. a MVC 5 / typescripted connected on Webapi, AutoHelp is a modern HTML 5 app
    Feedbackata: An exercise in interactive development.
    Ideaa: A place to get inspired and find ideas about a topic you are interested in
    Impression Studio - Free presentation Maker Tool: Impression Studio is a free presentation maker tool based on impress.js library. It aims to be free alternative to prezi. This tool is under heavy development.
    LavaMite: LavaMite is a program that aims to capture the random wax formations that arise in a lava lamp before it is bubbling freely.
    Load Runner - HTTP Pressure Test Tool: A Lightweight HTTP pressure test tool supports multi methods / content type / requests, written in C# 4.0.
    Microsoft Research Software Radio SDK: Microsoft Research Software Radio SDK
    Msp: Msp
    Power Torrent: Windows PowerShell module enabling BitTorrent (R) functionalities.

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing Object Oriented desktop app stuff. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast and it's probably fine to have these big scripts for the most part, but while explaining this to me people are also insisting that the whole idea of refactoring is bad: that compilers like the .NET compiler are actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient and don't have efficient memory management or run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs before your users do. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get a report containing the full stack trace and the local variables, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
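
    SmartAssembly wires all of this up from a checkbox, but the general shape of automated error reporting is easy to picture. Here is a minimal, hypothetical C# sketch (this is not SmartAssembly's actual code; the reporting endpoint and dialog text are invented for illustration, and the local-variable capture SmartAssembly performs requires IL rewriting and is not shown) of a handler that catches otherwise-unhandled exceptions and asks the user before sending a report:

    using System;
    using System.Net;
    using System.Text;
    using System.Windows.Forms;

    static class CrashReporter
    {
        // Hypothetical endpoint -- a real reporter would post to your own service.
        const string ReportingEndpoint = "https://example.com/error-reports";

        public static void Install()
        {
            // Catch exceptions that would otherwise terminate the app silently.
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                var ex = e.ExceptionObject as Exception;
                if (ex == null) return;

                var report = new StringBuilder()
                    .AppendLine("Message: " + ex.Message)
                    .AppendLine("Stack trace:")
                    .AppendLine(ex.StackTrace)
                    .ToString();

                var choice = MessageBox.Show(
                    "The application crashed. Send an error report?",
                    "Send Error Report", MessageBoxButtons.YesNo);

                if (choice == DialogResult.Yes)
                    new WebClient().UploadString(ReportingEndpoint, report);
            };
        }
    }

    Call CrashReporter.Install() once at startup, before Application.Run(), so the handler is in place for the whole session.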

    Read the article

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA and watching various videos and podcasts by Udi Dahan et al. on CQRS and Event Driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database, but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you should segregate a database on, say, an SOA service, a DDD Bounded Context, an application, etc?
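
    For concreteness, the message-based propagation described above usually looks something like the following minimal C# sketch (the CustomerAddressChanged event and the IMessageBus abstraction are invented for illustration, not taken from any of the books mentioned): the service that owns customer data writes to its own database and publishes an event, and every other service updates its own denormalised copy when it receives it.

    using System;

    // Hypothetical event contract shared between services.
    public class CustomerAddressChanged
    {
        public Guid CustomerId { get; set; }
        public string NewAddress { get; set; }
    }

    // Hypothetical message bus abstraction (backed by a queue or broker in practice).
    public interface IMessageBus
    {
        void Publish<T>(T message);
        void Subscribe<T>(Action<T> handler);
    }

    public class CustomerService
    {
        private readonly IMessageBus bus;
        public CustomerService(IMessageBus bus) { this.bus = bus; }

        // The owning service updates its own database, then announces the change.
        public void ChangeAddress(Guid customerId, string newAddress)
        {
            // ... update the customer row in this service's own database ...
            bus.Publish(new CustomerAddressChanged
            {
                CustomerId = customerId,
                NewAddress = newAddress
            });
        }
    }

    public class BillingService
    {
        // Each other service keeps a denormalised local copy, updated from events.
        public BillingService(IMessageBus bus)
        {
            bus.Subscribe<CustomerAddressChanged>(e =>
            {
                // ... update the billing database's copy of the address ...
            });
        }
    }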

    Read the article

  • Ubuntu security with services running from /opt

    - by thejartender
    It took me a while to understand what's going on here (I think), but can someone explain to me whether there are security risks in my logic of what's going on here, as I am trying to set up a home web server as a developer with some good Linux knowledge? Ubuntu is not like other systems, as it has restricted the root user account. You can not log in as root or su to root. This was a problem for me as I have had to install numerous applications and services to /opt as per user documentation (XAMPP for Linux is a good example). The problem here is that this directory is owned by root:root. I notice that my admin user account does not belong to the root group through the following command: groups username. So my understanding is that even though the files and services that I place in /opt belong to root, executing them by means of sudo (as required) does not mean that they are run as root? I imagine that the sudo command itself is somewhere that belongs to the root user and has 775 permissions? So the question I have is whether running a service like Tomcat, Apache, etc. exposes my system the way it would on other systems? Obviously I need to secure these in configurations, but isn't the golden rule to never run something as root? What happens if I have multiple services running under the same user/group with regards to a compromised server?

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need to have eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal, however I would like to render a cube. Also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered? (So there is a one-to-one mapping between vertex, normal, color, etc.) Note: I'm using OpenGL ES.
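
    For what it's worth, the duplication approach mentioned at the end of the question is usually built like the following sketch. It is written in C# purely for illustration (the Vertex struct and face helper are invented; in OpenGL ES you would upload the resulting arrays with glBufferData): a cube becomes 24 vertices, 4 per face, each face carrying its own normal and texture coordinates, plus 36 indices, instead of 8 shared vertices.

    using System.Collections.Generic;
    using System.Numerics;

    struct Vertex
    {
        public Vector3 Position;
        public Vector3 Normal;   // per-face normal, so the same corner differs per face
        public Vector2 TexCoord; // per-face texture coordinates as well
    }

    static class CubeBuilder
    {
        // Adds one face: 4 unique vertices sharing the face normal, plus 6 indices.
        // Call this six times with the corners and normal of each cube face.
        public static void AddFace(List<Vertex> verts, List<ushort> indices,
                                   Vector3[] corners, Vector3 normal)
        {
            ushort first = (ushort)verts.Count;
            Vector2[] uvs =
            {
                new Vector2(0, 0), new Vector2(1, 0),
                new Vector2(1, 1), new Vector2(0, 1)
            };
            for (int i = 0; i < 4; i++)
                verts.Add(new Vertex { Position = corners[i], Normal = normal, TexCoord = uvs[i] });

            // Two triangles per face, indexing only within this face's 4 vertices.
            indices.AddRange(new ushort[]
            {
                first, (ushort)(first + 1), (ushort)(first + 2),
                first, (ushort)(first + 2), (ushort)(first + 3)
            });
        }
    }

    The memory cost is 24 vertices instead of 8 for a cube, which is still tiny; indexing keeps paying off for meshes where adjacent faces genuinely share normals and texture coordinates.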

    Read the article

  • Why do I have a gnomekeyring.IOError when doing "quickly share"?

    - by Agmenor
    When I want to push my app to Launchpad by doing quickly share --verbose, I get the following Gnome Keyring error:
    Get Launchpad Settings
    Traceback (most recent call last):
      File "/usr/share/quickly/templates/ubuntu-application/share.py", line 101, in <module>
        launchpad = launchpadaccess.initialize_lpi()
      File "/usr/lib/python2.7/dist-packages/quickly/launchpadaccess.py", line 91, in initialize_lpi
        allow_access_levels=["WRITE_PRIVATE"])
      File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 539, in login_with
        credential_save_failed, version)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 342, in _authorize_token_and_login
        authorization_engine.unique_consumer_id)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 282, in load
        return self.do_load(unique_key)
      File "/usr/lib/python2.7/dist-packages/launchpadlib/credentials.py", line 336, in do_load
        'launchpadlib', unique_key)
      File "/usr/lib/python2.7/dist-packages/keyring/core.py", line 34, in get_password
        return _keyring_backend.get_password(service_name, username)
      File "/usr/lib/python2.7/dist-packages/keyring/backend.py", line 154, in get_password
        items = gnomekeyring.find_network_password_sync(username, service)
    gnomekeyring.IOError
    ERROR: share command failed
    Aborting
    This used to work, so this means that I already have SSH and GPG configured. This is probably part of the explanation: I have this error when I am connected to this machine through an ssh tunnel with X forwarding. But I don't have it when I have physical access to the computer. Could you please give me some indications on what to do?

    Read the article

  • How to Hibernate from .NET Apps and How to enable Hibernate in Windows XP

    The usage of desktop and laptop computers has increased phenomenally all around the world. This link gives you a picture of the power consumption of various devices we use daily. To reduce power consumption, Hibernate is one of the best ways provided by default in Windows Vista or Windows 7. The Hibernate feature enables you to close the machine without closing your applications; that means the applications will be restored as they were once we restart the machine. The Hibernate feature is not enabled in Windows XP by default. I’ve seen many people run (and never switch off) their machines for months and months because they do not want to close the windows or applications running in Windows XP. Below are the steps to enable Hibernate in Windows XP:
    1. Right click on the Desktop.
    2. Click on Properties.
    3. Go to the Screen Saver tab.
    4. Click on the Power button.
    5. Select the Hibernate tab.
    6. Check the “Enable hibernation” checkbox.
    7. Apply the settings.
    Now when you try to shut down, the “Shut down Windows” dialog shows a “Hibernate” option. You can safely close the machine without closing your applications or windows, as they will be restored once you turn on the machine. Sometimes you might want to provide this feature programmatically for the applications you develop for Windows. Generally you might want to provide this option in Windows applications where a process needs a long time; download managers are one of the best examples. Below is the code to Hibernate from .NET code:
    using System.Windows.Forms;

    namespace CodeKicks.WinApp.Machine
    {
        public static class MyMachineHelper
        {
            public static void DoHibernate()
            {
                //Application.SetSuspendState(PowerState.Suspend, true, false);
                Application.SetSuspendState(PowerState.Hibernate, true, false);
            }
        }
    }

    Read the article

  • Is HTML5/WebGL performance unreliable on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games, and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 sprites or so runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I found it generally has good gaming performance. All the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50 fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear if any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.

    Read the article

  • Iphone/Android app – chatroom development – what framework & hosting needs?

    - by MikaelW
    I have some experience regarding iPhone and Android development but I am now struggling to solve a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet, without having the app constantly “pull” content from the server. So that problem can’t be solved with a normal php/mysql website; there must be some kind of application running on a server that is able to send messages from the server to the phone, rather than having the phone check for new messages every 10 seconds… So I’m looking for ways to solve the different problems here: What framework should I use on the two sides (phone / server)? It should be some kind of library that doesn’t prevent me from writing paid apps. It should also be possible to have the same server for the iPhone and Android versions of the app. What server / hosting solution do I need, with what sort of features? I just have no experience regarding server applications that can handle and initiate multiple connections and are hosted on hardware that is always online. I tried to find resources online but couldn’t so far; either the libraries had the wrong kind of license/language or I just didn’t understand… Sometimes there were nice tutorials but for different needs such as peer2peer chat over a local network… Same with the server and the hosting problem, not sure where to start really. I’m calling for help and I promise I will complete this page with notes about the experience I will get :-) Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server… That being said, if I see something that good, it probably means that I have eaten the wrong kind of mushroom again… So, failing that, any pointer which might help me toward that quest would be greatly appreciated. Thanks in advance. Mikael
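
    To make the push model concrete (independently of any particular framework, which is what the question is really asking about), here is a minimal, hypothetical C# sketch of a server that keeps each client connection open and pushes every incoming message to all connected clients, instead of waiting for the phones to poll. A real chat server would also need authentication, message framing, and error handling; the port number here is arbitrary.

    using System;
    using System.Collections.Concurrent;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class ChatServer
    {
        // All currently connected clients; messages are pushed to each of them.
        static readonly ConcurrentDictionary<TcpClient, byte> Clients =
            new ConcurrentDictionary<TcpClient, byte>();

        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 5000);
            listener.Start();
            while (true)
            {
                TcpClient client = listener.AcceptTcpClient(); // connection stays open
                Clients.TryAdd(client, 0);
                new Thread(() => Receive(client)) { IsBackground = true }.Start();
            }
        }

        static void Receive(TcpClient client)
        {
            var buffer = new byte[4096];
            var stream = client.GetStream();
            int read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Push the received bytes to every connected client immediately.
                foreach (TcpClient other in Clients.Keys)
                    other.GetStream().Write(buffer, 0, read);
            }
            byte ignored;
            Clients.TryRemove(client, out ignored);
        }
    }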

    Read the article

  • Textures quality issues with Libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I have tried to simulate my gameboard directly in Illustrator just before setting my assets up in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my test on. As you can see it works great (for me at least ^^), the PNG quality is good for me, as expected! So I have edited all my PNGs at this size, set my assets up in libgdx and built my game apk. And here is a screenshot of my gameboard (don't pay attention to the differences in placing and objects, but look at the objects present on both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (though not as obvious) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible on my first screenshot. And I just can't figure out why this is happening Oo If you have any ideas, you are very welcome! Thanks. PS: You can check the two gameboards more clearly at these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator; Poor quality link, from the game. Second phase of tests: We display an object (the hedgehog) on our main menu screen to see how it looks. The thing is that it looks as it is supposed to, which means high quality with no pixels. The hedgehog PNG is coming from an atlas:
    layer.addActor(hedgehog);
    No loss of quality with this method. So we think the problem is coming from the method we are using to display it on our gameboard:
    blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));
    The block is getting its size from the vector we are associating with it, but we have a loss of quality with this method.

    Read the article

  • Podcast Show Notes: Old Habits Die Hard in the New SOA World

    - by OTN ArchBeat
    Like the previous series, the latest OTN ArchBeat Podcast program was also recorded in a hotel room just around the corner from Oracle OpenWorld in San Francisco just a few weeks ago. The gathered experts, all members of the OTN architect community, agreed to participate in an informal roundtable discussion of what's happening in Service Oriented Architecture. As you'll hear, the conversation ranged from the maturity of Service Oriented Architecture technology and tools, to the lingering and typically self-imposed problems that can prevent organizations from realizing the full potential of SOA, to what SOA means in the era of *aaS, mobile computing, and big data.
    Photo captions: Hajo Normann, Torsten Winterberg, Ronald van Luttikhuizen, and Guido Schmutz (L-R); Hajo Normann, Torsten Winterberg, Danilo Schmiedel, and Lonneke Dikmans (L-R)
    The Panelists (Listed alphabetically)
    Lonneke Dikmans, Managing Partner at Vennster, Oracle ACE Director
    Ronald van Luttikhuizen, Managing Partner at Vennster, Oracle ACE Director
    Hajo Normann, SOA & BPM Lead for ASG at Accenture, Oracle ACE Director
    Danilo Schmiedel, Solution Architect at Opitz Consulting
    Guido Schmutz, Technology Manager for SOA/BPM and Architecture Board at Trivadis, Oracle ACE Director
    Torsten Winterberg, Director of Strategy and Innovation and head of SOA Competence Center at Opitz Consulting, Oracle ACE Director
    The Conversation
    Listen to Part 1: SOA technology and tools are mature, says this panel of experts, but why do some organizations still struggle to take full advantage of industrialized SOA?
    Listen to Part 2 (Nov 6): Human nature and a lack of trust among stakeholders can thwart successful SOA. Can a marketplace approach and social tools improve the situation?
    Listen to Part 3 (Nov 13): Do SOA stakeholders recognize the problems caused by poor communication among siloed service development teams?
    Coming Soon
    SOA and B2B: The authors of Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial discuss Business to Business capabilities in Oracle SOA Suite 11g.
    Be a Guest Producer for an ArchBeat Podcast
    Want to be a guest producer for an OTN ArchBeat podcast? Put your topic and panelist suggestions in a comment on this post, or contact me at @OTNArchBeat.

    Read the article

  • Free Windows Store and Phone Developer Accounts for MSDN Subscribers

    - by Clint Edmonson
    If you are a member/subscriber to any of the following programs you are eligible to receive one-time, 12-month Windows Store and Windows Phone developer accounts:
    Visual Studio Professional with MSDN
    Visual Studio Test Professional with MSDN
    Visual Studio Premium with MSDN
    Visual Studio Ultimate with MSDN
    BizSpark
    On September 11, 2012 Microsoft announced that Windows Store is open to individual developers (Company only registration became available on August 1st). This means that eligible MSDN subscribers will be able to select between an individual and company account when registering for their developer account benefit. New or existing subscribers will see developer accounts listed as a benefit on the Getting Started page as well as various MSDN overview pages. Now that you have this benefit, why not get started? To activate this benefit, subscribers are provided with a unique token for each of the developer accounts. The tokens will work for both individual and company registration. To acquire and redeem the token:
    1. Log into My Account.
    2. Click on ‘Get Code’. A unique token will be delivered to each subscriber.
    3. Click on ‘How to Register’ (link will appear once code is claimed). A developer account details page will display that includes an overview of the benefit, token and registration information.
    4. Click on the link to ‘Register your code’. This launches the developer account registration process.
    Ready to start developing? Head over to Generation App to get started.

    Read the article

  • Can web apps allow fast data-typists to "type-ahead"?

    - by user61852
    In some data entry contexts, I've seen data typists type really fast, know the app they use so well, and have such a mechanical quality in their work that they can "type ahead", i.e. continue typing and "tab-bing" and "enter-ing" faster than the display updates, so that on many occasions they are typing in the data for the next form before it draws itself. Then when this next entry form appears, their keystrokes fill the text boxes and they continue typing, selecting etc. In contexts like this, this speed is desirable, since these persons are really productive. I think this "type ahead of time" is only possible in desktop apps, but I may be wrong. My question is whether this way of handling the keyboard buffer (which in desktop apps requires no extra programming) is achievable in web apps, or is this impossible because of the way web apps work, handle sessions, etc (network latency and the overhead of generating new web pages)? Edit: By "type ahead" I mean "keyboard type ahead" (typing faster than the next entry form can load), not suggest-as-you-type-like-Google type ahead. Typeahead is a feature of computers and software (and some typewriters) that enables users to continue typing regardless of program or computer operation: the user may type at whatever speed he or she desires, and if the receiving software is busy at the time it will handle the input later. Often this means that keystrokes entered will not be displayed on the screen immediately. This programming technique for handling user input relies on what is known as a keyboard buffer.
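
    The desktop behaviour in that definition boils down to a keyboard buffer: keystrokes are queued even while the application is busy, and replayed when the next form is ready. A minimal, hypothetical C# console illustration of the idea follows (not tied to any UI framework or to the web case being asked about; the two-second sleep just simulates a slow form):

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    class KeyboardBufferDemo
    {
        // Keystrokes accumulate here even while the "form" is still loading.
        static readonly ConcurrentQueue<ConsoleKeyInfo> Buffer =
            new ConcurrentQueue<ConsoleKeyInfo>();

        static void Main()
        {
            // Producer: capture keys as fast as the user can type.
            new Thread(() =>
            {
                while (true) Buffer.Enqueue(Console.ReadKey(true));
            }) { IsBackground = true }.Start();

            while (true)
            {
                Thread.Sleep(2000); // simulate the next form taking time to appear

                // Consumer: once the "form" is ready, replay everything typed ahead.
                ConsoleKeyInfo key;
                while (Buffer.TryDequeue(out key))
                    Console.Write(key.KeyChar);
                Console.WriteLine();
            }
        }
    }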

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word of mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI like many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front end interfaces (read javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, i.e. that Flash will not meet their requirements any better than standard HTML/CSS/JS and will only distract users from their content. What kind of first hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • Ranking hit after WP site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed WMT completely, including 301 redirects from every existing URL to the new domain, and then submitting a change of address. Traffic continued as normal, but then a few days after submitting the change of address traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that for the keywords I had previously targeted and performed well with, I am now ranking much, much lower. In some cases for low competition keywords I've only lost a few places; for higher competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that this would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if it is happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and now in its place is a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that seem to fit exactly the same page structure as the previous site I have seen a drop in rankings. Any help would be greatly appreciated. Thanks!

    Read the article

  • Which open source PHP project has the 'perfect' OOP design I can learn from?

    - by aditya menon
    I am a newbie to OOP, and I learn best by example. You could say this question is similar to Which Scala open source projects should I study to learn best coding practices - but in PHP. I have heard-tell that Symfony has the best 'architecture' (I will not pretend I know exactly what that means), as well as Doctrine ORM. Is it worth it to spend many months reading the source code of these projects, trying to deduce the patterns used and learning new tricks? I have seen an equal number of web pages dissing and liking Zend's codebase (will provide links if deemed necessary). Do you know of any other project that would make any veteran OOP developer shed tears of joy? Please let me add that practicality and scope of use are not a concern at all here - I just want to:
    1. Pick a project that has a codebase deemed awesome by devs way better and greater than me.
    2. Write code that achieves what the project does.
    3. Compare results and try to learn what I don't know.
    Basically, an academic interest codebase. Any recommendations please?

    Read the article

  • What are the risks of installing a "bad quality" package?

    - by ændrük
    When I try to install sonic-visualiser_1.9cc-1_amd64.deb via the Software Center the following warning message is displayed:
    The package is of bad quality
    The installation of a package which violates the quality standards isn't allowed. This could cause serious problems on your computer. Please contact the person or organisation who provided this package file and include the details beneath.
    Lintian check results for /home/ak/Downloads/sonic-visualiser_1.9cc-1_amd64.deb:
    Use of uninitialized value $ENV{"HOME"} in concatenation (.) or string at /usr/bin/lintian line 108.
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/bin/sonic-visualiser 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/applications/sonic-visualiser.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/CHANGELOG 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/COPYING 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/doc/sonic-visualiser/README 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser-layer.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/mimelnk/application/x-sonicvisualiser.desktop 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/ 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon-light.svg 1000/1000
    E: sonic-visualiser: wrong-file-owner-uid-or-gid usr/share/pixmaps/sv-icon.svg 1000/1000
    I understand that this means the package doesn't meet Debian policy and I know how to override the warning and install the package anyway. What are the risks of doing so?

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery” which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages to Windows Azure Virtual Machines is that it is stored in a standard Hyper-V format – with the base hard-disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system, and when you re-start that system it asks for a few details, such as what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, either for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx   But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3) And now for the final installment of my three part series on the Internet of Things & Commerce. Post one, “The Next 7,000 Days”, introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza.  This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together. ------- QUESTION: It’s your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping? Kristen: I’ll agree with Brian Celenza that hands down mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence---mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap in to their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and delivery of that content to meet those needs. And, mobile shopping doesn’t stop at the consumer / buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk-in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.    QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality? Kristen: Well, without question, connecting digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn’t a pureplay online retailer), they have to connect the dots because shoppers – whether consumers or B2B buyers – don't think in clearly defined channels anymore. The expectation is connectedness – for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pickup in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc.  A few years ago, the main link between store and digital was the smartphone.
    We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties in to the purpose and role of mobile within the digital channel. Put more broadly, how these pieces fit together in the overall brand puzzle.  The same could be said for “showrooming.” Where it was a major concern (i.e., shoppers using stores to look at merchandise and then order online from Amazon), in recent months, it’s emerged that the inverse is now becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior pure play retailers are challenged with. There are many technologies, behaviors, and information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc. QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years? Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.
    Commerce Service Center (v2007.1): ATG introduced the Commerce Service Center in 2007.1 and marked the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provided a means of connecting data from the call center and online channels.
    REST services framework (v9.1): In v9.1 we introduced the REST services framework and interface in the Platform that enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omni-channel features and functionality.
    Multisite Architecture (v10): With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.
    Endeca Page Builder (2.x) / Experience Manager (3.x): With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store – see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers.
    Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11): After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities throughout the iOS native app and the enablement of the core Commerce Service Center features.
    The result is that customers can leverage these services for their integrations with other systems, as well as their omnichannel initiatives.
    Mobile web reference application (v10.1): In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.
    Native (iOS) reference application (v10.1.1): We came out with the 10.1.1 shopper-facing native iOS ref app that illustrated how to use the Commerce REST services to deliver an iOS app. Also included Experience Manager-driven pages.
    Assisted Selling reference application (v10.2.1): The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products and then transact using the ATG engine.
    Support for Retail promotions (v11): As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.
    Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience – and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our create once, deploy commerce anywhere framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, Call Center, and new Websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technical Network.] ------ Thank you to both Brian and Kristen for their contributions to this blog series and their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because – if the digital fabric of our everyday lives continues to change at the same pace – the next five years (that's just under 2,000 days) will be dramatic. ---------- THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • Strange if-else branching behavior in a fragment shader

    - by Winged
    In my fragment shader I have passed a uniform int uLightType variable, which indicates what type of light is in use right now. The problem is that if-else branching does not work correctly - the fragment shader performs instructions in every if statement block.
    if (uLightType == 1) {
        // Spotlight light type
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
    } else if (uLightType == 2) {
        // Omni-directional light type
        shadowDepth = unpack(textureCube(uDepthCubemapSampler, -lightVec));
    }
    In the case when uLightType equals 1, unless I comment out the content of the second if block, it assigns another value to shadowDepth. Also, while uLightType equals 1, when I remove the second 'if' block and change == to != like in the sample code below, nothing happens (which means that uLightType really equals 1).
    if (uLightType != 1) {
        // Spotlight light type
        vec3 depthTextureCoord = vDepthPosition.xyz / vDepthPosition.w;
        shadowDepth = unpack(texture2D(uDepthMapSampler, depthTextureCoord.xy));
    }
    Also, when I manually create an int variable (which is not a uniform) like this:
    int lightType = 1;
    and replace uLightType with it in the if-else branch, everything works fine, so I guess it has something to do with the fact that uLightType is the uniform.

    Read the article

  • OSB and Ubuntu 10.04 - Too Many Open Files

    - by jeff.x.davies
    When installing the latest Oracle Service Bus (11gR1PS3) onto my Ubuntu 10.04 system, the Eclipse IDE was complaining about there being too many open files. The Oracle Service Bus and the Oracle Enterprise Pack for Eclipse (aka OEPE) do make use of a lot of files. By default, Ubuntu will restrict each user to 1024 open files. A much more realistic number for OSB development is 4096. Changing the file limit in Ubuntu is fairly simple (if arcane). You will need to modify two different files and then restart your server. First, you need to modify the limits.conf file as the root user. Open a terminal window and enter the following command:
    sudo gedit /etc/security/limits.conf
    Add the following 2 lines to the file. The asterisk simply means that the rule will apply to all users.
    * soft nofile 4096
    * hard nofile 4096
    Save your changes and close gedit. The second file to change is the common-session file. Use the following command:
    sudo gedit /etc/pam.d/common-session
    Add the following line:
    session required pam_limits.so
    Save the file and exit gedit. Restart your machine. You shouldn't have problems with too many open files anymore.

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, Interfaces instead of concrete classes. This leads to the use of Factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you --the Factory design pattern. My question, then, is, what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model I could just instantiate those classes; I don't need to ask a Factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: A tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out if I should adopt the Jena philosophy in my library which uses Jena, or if I should disregard it, pick my own design philosophy, and stick with it.
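
    The trade-off being asked about can be sketched without Jena at all. In the hypothetical C# example below (IModel, MemoryModel, DatabaseModel and ModelFactory are invented names for illustration, not Jena's API), callers depend only on the interface, so the factory can hand back a memory-backed or database-backed implementation, or later wrap one in caching or inference, without any calling code changing:

    using System.Collections.Generic;

    // Callers program against this interface only.
    public interface IModel
    {
        void AddStatement(string subject, string predicate, string obj);
        int Size { get; }
    }

    // One interchangeable implementation: keeps statements in memory.
    class MemoryModel : IModel
    {
        private readonly List<string[]> statements = new List<string[]>();
        public void AddStatement(string s, string p, string o)
        {
            statements.Add(new[] { s, p, o });
        }
        public int Size { get { return statements.Count; } }
    }

    // Another: would persist statements to a triple store instead.
    class DatabaseModel : IModel
    {
        public void AddStatement(string s, string p, string o) { /* write to storage */ }
        public int Size { get { return 0; /* would query the store */ } }
    }

    // The factory decides which concrete class to hand back; callers never use 'new'.
    public static class ModelFactory
    {
        public static IModel CreateMemoryModel() { return new MemoryModel(); }
        public static IModel CreateDatabaseModel(string connectionString) { return new DatabaseModel(); }
    }

    The cost is exactly the indirection the question complains about; the benefit is that the backing implementation can change without touching client code, which is essentially the bet Jena makes.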

    Read the article
