Search Results

Search found 1989 results on 80 pages for 'webkit notifications'.


  • The Apple iPad – I'm gonna get it!

    - by Sahil Malik
    Well, heck, here comes another non-techie blogpost. You know I’m a geek, so I love gadgets! I found it RATHER interesting to see all the negative news on the blogosphere about the iPad. The main bitch points are: no multitasking, no Flash, and it’s just a bigger iPhone. So here’s the deal! My view is, the above 3 are EXACTLY what I had personally hoped for in the Apple iPad. Before the release, I had gone on the record saying - “If the Apple Tablet is able to run full-fledged iTunes (so I can get rid of iTunes on my desktop, I don’t like iTunes on Windows), can browse the net, can read PDFs, and will be under $1000, I’ll buy it”. Well, so, the released iPad wasn’t exactly like my dream tablet. The biggest downer IMO was its inability to run full iTunes. But, really, in retrospect, I like the newly released iPad. And here is why. No multitasking and no Flash mean much better battery life. Frankly, I rarely multitask on my laptop/desktop .. yeah I know my OS does .. but ME – I don’t multitask, and I don’t think you do either!! As I type this blogpost, I have a few windows running behind the scenes, but they are simply waiting for me to get back to them. The only thing truly running that I am making use of, other than this blogpost, is the media player playing some music – which the iPad can do. Also, I am logged into IM/email – which again, the iPad can do via notifications. It does the limited multitasking I need, without chewing down on batteries. Smart thinking, precisely the reason I love the iPhone. I don’t want a bulky, battery-consuming machine. Lack of Flash? Okay sure, I can’t see Hulu on my iPad. That’s some loss. I can see YouTube. Also, per Adobe, I can’t see some porn sites, which I don’t want to see on my iPad anyway. But Flash is heavy, especially Flash video. My dream is to see Silverlight run on the iPhone and iPad. No Flash = not such a big loss. Speaking of battery life – 10 hours is plenty. I haven’t usually been away from electricity for that long, so I’m okay with charging it up when it runs low. It’s really not such a big deal honestly. Finally, eBook functionality – wow! I went on the record saying eBook readers are not for me, but seriously, the iPad is perfect for my eBook needs at least. And as far as it being just a bigger iPhone? I’ve always wanted a bigger iPhone, precisely for the eBook reading experience. I love my iPhone, I love the apps on it. The only thing that sucks about the iPhone is battery life, but other than that, it is the best gadget I have ever bought! And something that runs on mobile chips, is that thin, and has those newly written apps .. mail, calendar .. I am very very excited to get my iPad, which will be the 64gig 3G version. The biggest plus in an iPad: no contract on data. I am *hoping* this means that I can buy a SIM card in Europe and use the iPad here. That would be killer awesome! But hey, if I had to pick downers in the iPad, they would be: I wish they had a 128G version (now that we have a good video viewing machine, I know I’d chew up space quickly); sync over WiFi, seriously Apple, both for iPhone and iPad; the 3-month wait!!; and existing iPhone users should get a discount on the iPad data plan.

    Read the article

  • 7 Steps To Cut Recruiting Costs & Drive Exceptional Business Results

    - by Oracle Accelerate for Midsize Companies
    By Steve Viarengo, Vice President Product Management, Oracle Taleo Cloud Services  In good times, trimming operational costs is an ongoing goal. In tough times, it’s a necessity. In both good times and bad, however, recruiting occurs. Growth increases headcount in good times, and opportunistic or replacement hiring occurs in slow business cycles. By employing creative recruiting strategies in tandem with the latest technology developments, you can reduce recruiting costs while driving exceptional business results. Here are some critical areas to focus on. 1.  Target Direct Cost Savings Total recruiting process expenses are the sum of external costs plus internal labor costs. Most organizations can reduce recruiting expenses with direct cost savings. While additional savings on indirect costs can be realized from process improvement and efficiency gains, there are direct cost savings and benefits readily available in three broad areas: sourcing, assessments, and green recruiting. 2. Sourcing: Reduce Agency Costs Agency search firm fees can amount to 35 percent of a new employee’s annual base salary. Typically taken from the hiring department budget, these fees may not be visible to HR. By relying on internal mobility programs, referrals, candidate pipelines, and corporate career Websites, organizations can reduce or eliminate this agency spend. And when you do have to pay third-party agency fees, you can optimize the value you receive by collaborating with agencies to identify referred candidates, ensure access to candidate data and history, and receive automatic notifications and correspondence. 3. Sourcing: Reduce Advertising Costs You can realize significant cost reductions by placing all job positions on your corporate career Website. This will allow you to reap a substantial number of candidates at minimal cost compared to job boards and other sourcing options. 4.  Sourcing: Internal Talent Pool Internal talent pools provide a way to reduce sourcing and advertising costs while delivering improved productivity and retention. Internal redeployment reduces costs and ramp-up time while increasing retention and employee satisfaction. 5.  Sourcing: External Talent Pool Strategic recruiting requires identifying and matching people with a given set of skills to a particular job while efficiently allocating sourcing expenditures. By using an e-recruiting system (which drives external talent pool management) with a candidate relationship database, you can automate prescreening and candidate matching while communicating with targeted candidates. Candidate relationship management can lower sourcing costs by marketing new job opportunities to candidates sourced in the past. By mining the talent pool in this fashion, you eliminate the need to source a new pool of candidates for each new requisition. Managing and mining the corporate candidate database can reduce the sourcing cost per candidate by as much as 50 percent. 6.  Assessments: Reduce Turnover Costs By taking advantage of assessments during the recruitment process, you can achieve a range of benefits, including better productivity, superior candidate performance, and lower turnover (providing considerable savings). Assessments also save recruiter and hiring manager time by focusing on a short list of qualified candidates. Hired for fit, such candidates tend to stay with the organization and produce quality work—ultimately driving revenue.  7. 
Green Recruiting: Reduce Paper and Processing Costs You can reduce recruiting costs by automating the process—and making it green. A paperless process informs candidates that you’re dedicated to green recruiting. It also leads to direct cost savings. E-recruiting reduces energy use and pollution associated with manufacturing, transporting, and recycling paper products. And process automation saves energy in mailing, storage, handling, filing, and reporting tasks. Direct cost savings come from reduced paperwork related to résumés, advertising, and onboarding. Improving the recruiting process through sourcing, assessments, and green recruiting not only saves costs. It also positions the company to improve the talent base during the recession while retaining the ability to grow appropriately in recovery.

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns. Pattern: Pull from Cloud The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service (a sketch of this pattern follows at the end of this post). To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: on-premise assets not exposed to the Internet, firewall issues avoided by only initiating outbound connections. Cons: polling mechanisms may affect performance, may not satisfy near real-time requirements. Pattern: Open Firewall Ports The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut into the door allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs, simpler administration once the service is provisioned. Cons: needs firewall ports to be opened up for new services, may not suffice for batch integration requiring direct database access. Pattern: Virtual Private Networking The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops them inside your home. 
Pros: individual firewall ports don't need to be opened, more suited for high scalability needs, can support large volume data integration, easier management of one connection vs a multitude of open ports. Cons: VPN setup, specific hardware support, requires cloud provider to support virtual private computing. Pattern: Reverse Proxy / API Gateway The on-premise system uses a reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing, and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them. Pros: very secure, very flexible. Cons: introduces a new software component, needs DMZ deployment and management. Pattern: On-Premise Agent (Tunneling) A lightweight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull or push based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections. Pros: lightweight software, IT doesn't need to set up anything. Cons: may bypass critical firewall checks (e.g. virus scans), separate software download, proliferation of non-IT-managed software. Conclusion The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases, the basic "Pull from Cloud" may be acceptable, whereas in others, an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
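
    To make the "Pull from Cloud" pattern concrete, here is a minimal sketch of an outbound-only poller. It is illustrative only: the endpoint URL, auth header, and payload handling are invented for this sketch; a real implementation would call the SaaS vendor's actual query API (e.g. RightNow OQL or SOAP, as noted above).

        using System;
        using System.Net;
        using System.Threading;

        class CloudPoller
        {
            // Hypothetical SaaS query endpoint - placeholder, not a real API.
            const string EndpointUrl = "https://saas.example.com/api/events?since={0}";

            static void Main()
            {
                DateTime lastPoll = DateTime.UtcNow.AddHours(-1);
                while (true)
                {
                    using (var client = new WebClient())
                    {
                        client.Headers[HttpRequestHeader.Authorization] = "Bearer <token>";
                        // Outbound-only call: no inbound firewall ports are opened.
                        string batch = client.DownloadString(string.Format(
                            EndpointUrl, Uri.EscapeDataString(lastPoll.ToString("o"))));
                        // Hand the batch to the on-premise integration layer here.
                        Console.WriteLine(batch);
                    }
                    lastPoll = DateTime.UtcNow;
                    Thread.Sleep(TimeSpan.FromHours(1)); // hourly schedule, per the post
                }
            }
        }

    The design trade-off is exactly the one listed in the cons: the poll interval bounds your latency, in exchange for never exposing an on-premise listener.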

    Read the article

  • Key Windows Phone Development Concepts

    - by Tim Murphy
    As I am doing more development in and out of the enterprise arena for Windows Phone, I decided I would study for the 70-599 test.  I generally take certification tests as a way to force me to dig deeper into a technology.  Between the development and studying, I decided it would be good to put a post together on key development features of the Windows Phone 7 environment.  Contrary to popular belief, the launch of Windows Phone 8 will not obsolete Windows Phone 7 development.  With the launch of 7.8 coming shortly, and with people who will remain on 7.x for the foreseeable future, there are still consumers needing these apps, so don’t throw out the baby with the bath water. PhoneApplicationService This is a class that every Windows Phone developer needs to become familiar with.  When it comes to application state this is your go-to repository.  It also contains events that help with management of your application’s lifecycle.  You can access it like the following code sample.

        PhoneApplicationService.Current.State["ValidUser"] = userResult;

    DeviceNetworkInformation This class allows you to determine the connectivity of the device and be notified when something changes with that connectivity.  If you are making web service calls you will want to check here before firing off. I have found that this class doesn’t actually work very well for determining if you have internet access.  You are better off using the following code, where IsConnectedToInternet is an App-level property.

        private void Application_Launching(object sender, LaunchingEventArgs e)
        {
            // Validate user access
            if (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType
                != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None)
            {
                IsConnectedToInternet = true;
            }
            else
            {
                IsConnectedToInternet = false;
            }
            NetworkChange.NetworkAddressChanged +=
                new NetworkAddressChangedEventHandler(NetworkChange_NetworkAddressChanged);
        }

        void NetworkChange_NetworkAddressChanged(object sender, EventArgs e)
        {
            IsConnectedToInternet =
                (Microsoft.Phone.Net.NetworkInformation.NetworkInterface.NetworkInterfaceType
                 != Microsoft.Phone.Net.NetworkInformation.NetworkInterfaceType.None);
        }

    Push Notification Push notification allows your application to receive notifications in a way that reduces the application’s power needs. This MSDN article is a good place to get the basics of push notification, but you can see the essential concept in the diagram below.  There are three types of push notification: toast, Tile, and raw.  The first two work regardless of the state of the application, whereas raw messages are discarded if your application is not running.   Live Tiles Live tiles are one of the main differentiators of the Windows Phone platform.  They allow users to find information at a glance from their start screen without navigating into individual apps.  Knowing how to implement them can be a great boost to the attractiveness of your application. The simplest step-by-step explanation for creating live tiles is here. Local Database While your application really only has Isolated Storage as a data store, there are some ways of getting database functionality to develop against.  There are a number of open source ORM-style solutions.  Probably the best and most native way I have found is to use LINQ to SQL.  It does take a significant amount of setup, but the ease of use once it is configured is worth the cost.  Rather than repeat the full concepts here I will point you to a post that I wrote previously. 
Tasks (Bing, Email) Leveraging built-in features of the Windows Phone platform is an easy way to add functionality that would be expensive to develop on your own.  The classes that you need to make yourself familiar with are BingMapsDirectionsTask and EmailComposeTask.  These will allow your application to supply directions and give the user an email path to relay information to friends and associates (a short sketch follows at the end of this post). Event model The ability for users to quickly switch to other apps or the home screen is just one reason why knowing the Windows Phone event model is important.  You need to be able to save data so that if a user gets a phone call they can come back to exactly where they were in your application.  This means that you will need to handle events such as Launching, Activated, Deactivated and Closing at an application level.  You will probably also want to get familiar with the OnNavigatedTo and OnNavigatedFrom events at the page level.  These will give you an opportunity to save data as a user navigates through your app. Summary This is just a small portion of the concepts that you will use while building Windows Phone apps, but these are some of the most critical.  With the launch of Windows Phone 8 this list will probably expand.  Take the time to investigate these topics further and try them out in your apps. del.icio.us Tags: Windows Phone 7,Windows Phone,WP7,Software Development,70-599
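
    As a minimal illustration of the launcher tasks mentioned above, EmailComposeTask can be invoked as sketched here; the address and text are placeholders, not values from the original post.

        using Microsoft.Phone.Tasks;

        // Opens the built-in email compose screen pre-filled with our content.
        // The user still reviews and sends the mail themselves, so the app
        // needs no special capability. All values below are placeholders.
        EmailComposeTask emailTask = new EmailComposeTask();
        emailTask.To = "someone@example.com";
        emailTask.Subject = "Directions";
        emailTask.Body = "Here is the meeting point I mentioned.";
        emailTask.Show();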

    Read the article

  • Windows 8 for productivity?

    - by Charles Young
    At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible. With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume. Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far? Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu. What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance. The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  
It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive. The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say, to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears. Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop taskbar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time, which makes complete sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows. Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.

    Read the article

  • GCM: onMessage() from GCMIntentService is never called [migrated]

    - by Shrikant
    I am implementing GCM (Google Cloud Messaging - PUSH Notifications) in my application. I have followed all the steps given in the GCM tutorial from developer.android.com. My application's build target is pointing to Google API 8 (Android 2.2 version). I am able to get the register ID from GCM successfully, and I am passing this ID to my application server. So the registration step is performed successfully. Now when my application server sends a PUSH message to my device, the server gets the response SUCCESS=1 FAILURE=0, etc., i.e. the server is sending the message successfully, but my device never receives it. After searching a lot about this, I came to know that GCM pushes messages on port 5228, 5229 or 5230. Initially, my device and laptop were restricted for some websites, but then I was granted permission to access all websites, so I guess these port numbers are open for my device. So my question is: I never receive any PUSH message from GCM. My onMessage() from the GCMIntentService class is never called. What could be the reason? Please see my following code and guide me accordingly. I have declared the following in my manifest:

        <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="8" />
        <permission
            android:name="package.permission.C2D_MESSAGE"
            android:protectionLevel="signature" />
        <!-- App receives GCM messages. -->
        <uses-permission android:name="com.google.android.c2dm.permission.RECEIVE" />
        <!-- GCM connects to Google Services. -->
        <uses-permission android:name="android.permission.INTERNET" />
        <!-- GCM requires a Google account. -->
        <uses-permission android:name="android.permission.GET_ACCOUNTS" />
        <!-- Keeps the processor from sleeping when a message is received. -->
        <uses-permission android:name="android.permission.WAKE_LOCK" />
        <uses-permission android:name="package.permission.C2D_MESSAGE" />
        <uses-permission android:name="android.permission.INTERNET" />

        <receiver
            android:name="com.google.android.gcm.GCMBroadcastReceiver"
            android:permission="com.google.android.c2dm.permission.SEND" >
            <intent-filter>
                <action android:name="com.google.android.c2dm.intent.RECEIVE" />
                <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
                <category android:name="packageName" />
            </intent-filter>
        </receiver>
        <receiver android:name=".ReceiveBroadcast" android:exported="false" >
            <intent-filter>
                <action android:name="GCM_RECEIVED_ACTION" />
            </intent-filter>
        </receiver>
        <service android:name=".GCMIntentService" />

        /**
         * @author Shrikant.
         */
        public class GCMIntentService extends GCMBaseIntentService {

            /**
             * The Sender ID used for GCM.
             */
            public static final String SENDER_ID = "myProjectID";

            /**
             * This field is used to call Web-Service for GCM.
             */
            SendUserCredentialsGCM sendUserCredentialsGCM = null;

            public GCMIntentService() {
                super(SENDER_ID);
                sendUserCredentialsGCM = new SendUserCredentialsGCM();
            }

            @Override
            protected void onRegistered(Context arg0, String registrationId) {
                Log.i(TAG, "Device registered: regId = " + registrationId);
                sendUserCredentialsGCM.sendRegistrationID(registrationId);
            }

            @Override
            protected void onUnregistered(Context context, String arg1) {
                Log.i(TAG, "unregistered = " + arg1);
                sendUserCredentialsGCM.unregisterFromGCM(LoginActivity.API_OR_BROWSER_KEY);
            }

            @Override
            protected void onMessage(Context context, Intent intent) {
                Log.e("GCM MESSAGE", "Message Recieved!!!");
                String message = intent.getStringExtra("message");
                if (message == null) {
                    Log.e("NULL MESSAGE", "Message Not Recieved!!!");
                } else {
                    Log.i(TAG, "new message= " + message);
                    sendGCMIntent(context, message);
                }
            }

            private void sendGCMIntent(Context context, String message) {
                Intent broadcastIntent = new Intent();
                broadcastIntent.setAction("GCM_RECEIVED_ACTION");
                broadcastIntent.putExtra("gcm", message);
                context.sendBroadcast(broadcastIntent);
            }

            @Override
            protected void onError(Context context, String errorId) {
                Log.e(TAG, "Received error: " + errorId);
                Toast.makeText(context, "PUSH Notification failed.", Toast.LENGTH_LONG)
                        .show();
            }

            @Override
            protected boolean onRecoverableError(Context context, String errorId) {
                return super.onRecoverableError(context, errorId);
            }
        }
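
    For context, a minimal sketch of the server side of this exchange against the GCM HTTP endpoint - the question doesn't state the app server's platform, so C# here is an arbitrary choice, and the API key, registration ID, and the "message" payload key (chosen to match the intent.getStringExtra("message") call above) are placeholders:

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class GcmSender
        {
            static void Main()
            {
                const string apiKey = "YOUR_API_KEY";            // from the Google APIs console
                const string regId = "DEVICE_REGISTRATION_ID";   // sent up by the device

                var request = (HttpWebRequest)WebRequest.Create(
                    "https://android.googleapis.com/gcm/send");
                request.Method = "POST";
                request.ContentType = "application/json";
                request.Headers.Add("Authorization", "key=" + apiKey);

                // "message" must match the extra name the client reads in onMessage().
                string payload = "{\"registration_ids\":[\"" + regId + "\"],"
                               + "\"data\":{\"message\":\"Hello from the server\"}}";
                byte[] body = Encoding.UTF8.GetBytes(payload);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                    s.Write(body, 0, body.Length);

                // The response JSON carries the success/failure counts the question
                // mentions (e.g. "success":1,"failure":0).
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    Console.WriteLine(reader.ReadToEnd());
            }
        }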

    Read the article

  • Chrome does not re-draw properly on Windows 8

    - by Akshat Mittal
    There are a lot of problems with Chrome (24.0.1312.14 beta, but all this happened before the update also) on Windows 8. Problems and explanations are listed below: Google Chrome re-draw time: When I switch tabs, the window retains the content of the previous tab and displays that even if I move my mouse; it only refreshes (re-draws) when there is a change on the webpage (like on hover) or I do a select all (or scroll). One thing to note is that the hover and select happen on the real page and not the retained image-like thing of the older webpage. Chrome is slow and laggy: Websites such as Facebook and Twitter (and more) have gone extremely laggy in Chrome (Win 8). When I was using Windows 7, I never experienced any lag. Also when using HTML5 websites, the transition (the -webkit-transition in CSS) goes extremely slow at times. Plugins crash: Plugins like Flash Player, Shockwave Player, and more that are built into Chrome crash a lot, even when doing simple tasks like playing YouTube videos or displaying ads. Chrome crashes: Chrome has crashed over 100 times in the past month, randomly, and I don't know the reason. Random page crashes: Chrome shows chrome://crash/ (copy-paste this in the address bar) on random pages, even when the page has just loaded. I understand that this can happen on heavy HTML5 or JS websites, but what about HTML-only websites? Computer freeze: Chrome sometimes randomly freezes my computer. Freeze in the sense that none of the other apps are working either; it's like the whole system freezes and I cannot even switch to other apps. I am sure that this is because of Chrome, since this happens only when Chrome is active. Most of the things above happen on Super User also; Super User never had any problem when using Chrome on Windows 7. UPDATE 1: @magicandre1981 commented about trying to disable hardware acceleration. I tried it, and it somewhat solved the problem but didn't fix it. I am still experiencing all the above issues, but less frequently (maybe because Chrome restarted completely). UPDATE 2: @avirk asked me to try a stable version of Chrome and Firefox. I didn't experience any lag in Firefox, and a little (negligible) lag in Chrome 22 (maybe because it's a new copy of Chrome; I haven't used it much). UPDATE 3: @NothingsImpossible said that he is also experiencing the same problem on Windows Server 2008! This seems to be a major issue now. He also said that GPU load is high at the same time - I saw the same thing. UPDATE 4: Recently, Chrome updated to v24 stable (I have been using stable for a long time now). I was experiencing this problem a lot less in Chrome 23, but it is back in Chrome 24. Seems like Chrome 24 is the most affected by this bug, as the same problem was frequent in the Chrome 24 beta also. UPDATE 5: Chrome was updated to v25 stable. This problem is 99% gone; it is still there in 1% of the cases. One such example is when I leave Chrome inactive for a while with a few tabs open: the tabs go black and no activity can get them back to the active state. If I open a new tab, the new tab is OK but the others are still black, and I need to close all those tabs. UPDATE 6: Chrome updated to the v27 stable channel; this problem is nearly gone. It does happen occasionally, but not as frequently as in earlier versions of Chrome. UPDATE 7: I am on Chrome v35.0.1916.114 stable, Windows 8.1 Pro Update 1. Some of the other problems appear to be back. Chrome is slow and laggy again, and re-draw time is getting worse. 
Is anybody else experiencing such issues? Does anybody have a solution to any of these?

    Read the article

  • How can I tell Firefox to ignore unprintable characters?

    - by BrianH
    Edit: Summary Apparently the intended character to display in this case is an "en-dash". This page has a table halfway down that shows that for the &ndash;, some software will convert the correct hex code of 2013 to 0096 (look at the first row in the table). This answer on Stack Overflow explains that somehow this is a mixup between Windows-1252 and UTF-8. This blog article reinforces this: Character 150 (0x96) is the Unicode character "START OF GUARDED AREA" in the non-displayed C1 control character range, but in the Windows-1252 encoding it's mapped to the displayable character 0x2013 "en-dash" (a short dash). Others have struggled with this when producing content, as this answer on Stack Overflow shows how to replace 0x0096 with 0x2013. Google must realize this, because as stated in my original question below, Google's cached version of the Amazon page has &ndash;, so it seems they are automatically correcting these mistakes on pages they cache. I have tried setting my encoding to Windows-1252 but that does not help. So now I guess my question is, how can I tell Firefox to ignore unprintable characters like these? Original content below: (Firefox 3.6.13 on Windows XP) Every once in a while I notice an odd character on certain web pages when browsing the web. It is an outline of a box with a 4-digit number inside. An example of a page that has these characters is: http://aws.amazon.com/ec2/#highlights After each section heading (Elastic, Completely Controlled, ...) I see a box with the number "0096" inside. I looked at the cached version on Google, and Google has &ndash; in its place, so I'm guessing I should be seeing a dash there instead of the box with the numbers in it. I have tried changing the character encoding in Firefox but haven't been able to find one that shows these characters correctly. Is there a way to allow Firefox to view these characters? Thanks in advance! Edit - adding a screen shot of the "special" characters: Edit #2 - tried in Ubuntu - new screenshots I logged into my Ubuntu desktop and browsed to the Amazon page in Chrome and Firefox. Chrome completely ignores the character, even if I inspect or view the page source. Firefox in Ubuntu displays the character exactly like Firefox on my Windows XP box. I copied the character and played around with it at the command line - here is a screenshot of the results: It looks like I can paste the character into this post as well: `` It is definitely not isolated to Windows XP. I tried setting the character encoding for my terminal to Windows-1252 (from Dennis' comment below), but then it just displays this character as a question mark. I pulled the webpage down with wget and with curl, and both outputs show this character as: <96> It makes me wonder if this character renders correctly for anyone. It appears WebKit just ignores it, my IE6 ignores it, and Firefox displays the box with the numbers in it. I would have to imagine the design team at Amazon can see it correctly? It's not a huge deal to get these characters displaying correctly, but it would be nice to know if there is a solution to this.
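
    For what it's worth, the replacement those answers describe boils down to a one-character substitution. A minimal sketch in C# (the language choice and the sample string are mine, not from the question or its answers):

        using System;
        using System.Text;

        class EnDashFix
        {
            static void Main()
            {
                // Byte 0x96 decoded as Windows-1252 really is an en-dash (U+2013)...
                string viaWin1252 = Encoding.GetEncoding(1252).GetString(new byte[] { 0x96 });
                Console.WriteLine("U+{0:X4}", (int)viaWin1252[0]);  // prints U+2013

                // ...but the same value kept as a Unicode code point (U+0096) is an
                // invisible C1 control character - the "0096" box Firefox shows.
                string broken = "Elastic\u0096Completely Controlled";
                string repaired = broken.Replace('\u0096', '\u2013');
                Console.WriteLine(repaired);  // Elastic–Completely Controlled
            }
        }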

    Read the article

  • The Windows Browser Ballot Screen Offers Web Browser Choice to European Users

    - by Matthew Guay
    Since March, our friends across the pond in Europe get to decide which browser they want to install with their Windows OS. Today we thought we would take a look at the ballot choices; some are well known, and others you may not have heard of. Windows users in European countries should start seeing the so-called “Browser Ballot Screen” after installing the Windows Update KB976002 (link below). The browser ballot offers a dozen different browsers, including some you’ve likely never heard of.  They each have some unique features, they are all free, and here we take a quick look at each of them. Internet Explorer 8 Internet Explorer is the world’s most used web browser, as it’s bundled with Windows. It also includes several unique features, including Accelerators that make it easy to search or find a map of a location, and InPrivate filtering to directly control what sites can get personal information.  Additionally, it offers great integration with Windows Touch and the new taskbar in Windows 7. IE 8 runs on Windows XP and newer, and is bundled with Windows 7. Mozilla Firefox 3.6 Firefox is the most popular browser other than Internet Explorer.  It is the modern descendant of Netscape, and is loved by web developers for its adherence to web standards, openness, and expandability.  It offers thousands of add-ons and themes to let you customize it to fit your preferences. The most recent version has added Personas, which are quick, lightweight themes to let you personalize the look of your browser. It’s open source, and runs on all modern versions of Windows, Mac OS X, and Linux. Of course, thanks to Asian Angel, our resident browser expert, you can check out several articles regarding this popular IE alternative. Google Chrome 4 Google Chrome has gained an impressive amount of market share during its short time in the market. It offers a minimalistic interface and fast speeds with intensive web applications. The address bar is also a search bar, so you can enter a search query or web address and quickly get the information you need. With version 4 you can add a growing number of extensions, personalize it with a variety of stylish themes, and automatically translate foreign websites into your own language. Opera 10.50 Although Opera has been around for over a decade, relatively few users have used it. With the new 10.50 release, Opera has many unique features packed into a sleek UI. It integrates well with Aero and the Windows 7 taskbar, and lets you preview the contents of your websites in the tab bar. It also includes Opera Unite, a small personal web server to make file sharing easy; Opera Turbo to speed up your internet when the connection is slow; and Opera Link to keep all your copies of Opera in sync. It’s a popular browser on many mobile devices, and version 10.50 has a lot of enhancements. Apple Safari 4 Safari is the default browser in Mac OS X, and starting with version 3 it has been available for Windows as well. It’s based on WebKit, the popular new rendering engine that provides great speed and standards compatibility.  Safari 4 lets you browse your browsing history in a unique Cover Flow interface, and shows your Top Sites in a fancy, 3D interface.  It’s also great for viewing mobile websites for the iPhone and other mobile devices through its Developer Tools. Flock 2.5 Based on the popular Firefox core, Flock brings a multitude of social features to your browsing experience. 
You can view the latest YouTube videos and Flickr pictures, update your favorite social network, and keep up with your webmail thanks to its integration with a wide variety of services. You can even post to your blog through the integrated blog editor. If your time online is mostly spent in social services, this may be a browser you want to check out. Maxthon 2.5 Maxthon is a unique browser that builds on Internet Explorer to bring more features with IE’s rendering. Formerly known as MyIE2, Maxthon was popular for bringing tabbed browsing with IE rendering during the days of IE 6.  Today Maxthon supports a wide range of plugins and skins, so you can customize it however you want. It includes mouse gestures, a web accelerator to speed up pokey internet connections, a content blocker to remove unwanted content from sites, an online account to back up your favorites, and a nice download manager. Avant Browser Another nice browser based on Internet Explorer, Avant brings a wide variety of features in a nice brushed-metal interface. It includes integrated AutoFill for forms, mouse gestures, customizable skins, and privacy protection features. It also includes a Flash blocker that will only load Flash in webpages when you select it. You can also integrate Avant with an online account to store your bookmarks, feeds, settings, and passwords online. Sleipnir Sleipnir is a customizable browser meant for advanced users that is quite popular in Japan. It’s built on the Trident engine, and virtually every aspect of it is customizable, unlike Internet Explorer.   FlashPeak SlimBrowser SlimBrowser from FlashPeak incorporates a lot of features like Popup Killer, Auto Login, site filtering, and more. It’s based on Internet Explorer but offers a lot more customization options out of the box.   K-meleon This basic browser is light on system resources and based on the Gecko engine. It’s been in development for years on SourceForge, and if you like to tweak virtually any aspect of your browser, this might be a good choice for you.   GreenBrowser GreenBrowser is based on Internet Explorer and is available in several languages. It has a large number of features out of the box and is light on system resources.   Conclusion The European Union asked for more choice in the web browsers users could pick from when installing Windows, and with the Browser Ballot Screen they certainly get a variety to choose from.  If you’ve tried out some of the lesser known browsers, or think some important ones have been left out, leave a comment and tell us about it. 
Learn More About the Browser Ballot Screen and Download Alternatives to IE Windows Update KB976002

    Read the article

  • Make your CHM Help Files show HTML5 and CSS3 content

    - by Rick Strahl
    The HTML Help 1.0 specification, aka CHM files, is pretty old. In fact, it's practically ancient, as it was introduced in 1997 when Internet Explorer 4 was introduced. HTML Help 1.0 is basically a completely HTML-based help system that uses a Help Viewer which internally uses Internet Explorer to render the HTML Help content. Because of its use of the Internet Explorer shell for rendering, there were many security issues in the past, which resulted in a locking down of the Web Browser control in Windows and also of the Help Engine, which caused some unfortunate side effects. Even so, CHM continues to be a popular help format because it is very easy to produce content for it, using plain HTML, and because it works with many Windows application platforms out of the box. While there have been various attempts to replace CHM help files, CHM still seems to be a popular choice for many applications to display their help systems. The biggest alternative these days is no system-based help at all, but links to online documentation. For Windows apps though, it's still very common to see CHM help files, and there are still a ton of CHM help files out there and lots of tools (including our own West Wind Html Help Builder) that produce output for CHM files as well as Web output. Image is Everything and you ain't got it! One problem with the CHM engine is that it's stuck with an ancient Internet Explorer version for rendering. For example, if you have help content that uses HTML5 or CSS3, you might have an HTML Help topic like the following, shown here in a full Web Browser instance of Internet Explorer: The page clearly uses CSS3 features like rounded corners and box shadows, rendered with plain CSS. Note that I used Internet Explorer on purpose here to demonstrate that IE9 on Windows 7 can properly render this content using some of the new features of CSS, but the same is true for all other recent versions of the major browsers (FireFox 3.1+, Safari 4.5+, WebKit 9+, etc.). Unfortunately, if you take this nice and simple CSS3 content and run it through the HTML Help compiler to produce a CHM file, the resulting output on the same machine looks a bit less flashy: All the CSS3 styling is gone; the page still displays and functions, but all the extra styling features are lost. This even though I am running on a Windows 7 machine with IE9 that should be able to render these CSS features. Bummer. Web Browser Control - perpetually stuck in IE 7 Mode The problem is the Web Browser/Shell Components in Windows. This component is, and has been, part of Windows for as long as Internet Explorer has been around, but the Web Browser control hasn't kept up with the latest versions of IE. In a nutshell, the control is stuck in IE7 rendering mode by default, for engine compatibility reasons. However, there is at least one way to fix this explicitly, using Registry keys on a per-application basis. The key point from that blog article is that you can override the IE rendering engine for a particular executable by setting one (or more) registry flags that tell the Windows Shell which version of the Internet Explorer rendering engine to load. An application that wishes to use a more recent version of Internet Explorer can then register itself during installation for the specific IE version desired, and from then on the application will use that version of the Web Browser component. 
If the application is older than the specified version, it falls back to the default version (IE7 rendering). Forcing CHM files to display with IE9 (or later) Rendering Knowing that we can force the IE version used for a given process, it's also possible to affect the CHM rendering by setting the same keys on the executable that's hosting the CHM file. What that executable file is depends on the type of application, as there are a number of ways to launch the help engine:

    hh.exe - The standalone Windows CHM Help Viewer that launches when you open a CHM from Windows Explorer. You can manually add hh.exe to the registry keys.
    YourApplication.exe - If you're using .NET or any tool that internally uses the hhControl ActiveX control to launch help content, your application is your host. You should add your application's exe to the registry during application startup.
    foxhhelp9.exe - If you're building a FoxPro application that uses the built-in help features, foxhhelp9.exe is used to actually host the help controls. Make sure to add this executable to the registry.

    What to set: You can configure the Internet Explorer version used for an application in the registry by specifying the executable file name and a value that specifies the IE version desired. There are two different sets of keys for 32 bit and 64 bit applications.

        32 bit only or 64 bit:
        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
        Value Key: hh.exe

        32 bit on a 64 bit machine:
        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
        Value Key: hh.exe

    Note that it's best to always set both values, ideally when you install your application, so it works regardless of which platform you run on. The value specified is a DWORD, and the interesting values are decimal 9000 for IE9 rendering mode depending on !DOCTYPE settings, or 9999 for IE9 standards mode always. You can use the same logic with 8000 and 8888 for IE8, and the final value of 7000 for IE7 (one has to wonder what they're going to do for version 10 to perpetuate that pattern). I think 9000 is the value you'd most likely want to use: 9000 means that IE9 will be used for rendering, but unless the right doctypes are used (XHTML and HTML5 specifically) IE will still fall back into quirks mode as needed. This should allow existing pages to continue to use the fallback engine while new pages that have the proper HTML doctype set can take advantage of the newest features. Here's an example of how I set the registry keys in my Tarma Installmate registry configuration: Note that I set all three values both under the Software and Wow6432Node keys so that this works regardless of where these EXEs are launched from. Even though all apps are 32 bit apps, the 64 bit (the default one shown selected) key is often used. So, now once I've set the registry key for hh.exe, I can launch my CHM help file from Explorer and see the following CSS3 IE9-rendered display:
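
    If you set the keys from application code rather than from an installer, the equivalent call is small. A minimal C# sketch under the assumptions above (DWORD 9000 and hh.exe as the host; writing to HKLM requires an elevated process):

        using Microsoft.Win32;

        class BrowserEmulationSetup
        {
            const string KeyPath =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";
            const string KeyPath32On64 =
                @"HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION";

            static void Main()
            {
                // 9000 = IE9 rendering, honoring the page's !DOCTYPE.
                // Requires admin rights since these keys live under HKLM.
                Registry.SetValue(KeyPath, "hh.exe", 9000, RegistryValueKind.DWord);
                Registry.SetValue(KeyPath32On64, "hh.exe", 9000, RegistryValueKind.DWord);
            }
        }
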
Using this technique it's possible to ship an application with a help file and allow your CHM help to display with richer CSS markup and correct rendering, using the stricter and more consistent XHTML or HTML5 doctypes. If you provide both Web help and in-application help (and why not, if you're building from a single source), you can now sidestep the issue of your customers asking: why does my help file look so much shittier than the online help… No more!

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML5, Help, Html Help Builder, Internet Explorer, Windows

    Read the article

  • Adaptive ADF/WebCenter template for the iPad

    - by Maiko Rocha
    One of my WebCenter Portal customers was asking about adaptive design with ADF/WebCenter Portal and how they could go about creating an adaptive iPad template for their WebCenter Portal application. They were looking not only for the out-of-the-box support for mobile Safari, which is certified against PS5+ (11.1.1.6) for ADF/WebCenter, but also to create a specific template to streamline their workflow on the iPad. It seems they wanted something along the lines of what Yahoo! Mail provides for the iPad - so the example I will use is shamelessly inspired by Y! Mail's iPad UI. But first, let's quickly understand how we can bake some adaptive goodness into ADF Faces. The first thing to understand is that, yes, there are a couple of constraints we will need to work around, namely the use of layout managers and skins. Please also keep in mind that I'm not, and don't pretend to be, a web designer, much less a UX specialist, so feel free to leave your thoughts on the matter in the comments section. Now, back to the limitations.

Layout Managers

ADF Faces layout managers create an abstraction on top of the generated HTML code for a page, so a developer doesn't need to worry about how to size and dimension the UI layout (e.g., af:panelStretchLayout). Although layout managers are very helpful, in this specific situation we will need to know a little bit more about how the final HTML is rendered, so we can apply CSS classes accordingly and create transition containers where the media queries will be applied. If you're using 11gR2 (11.1.2.2.3), the new af:panelGridLayout component (here and here) will greatly improve creating responsive templates and pages, because it is based on grid/fluid systems and renders straight to DIVs on your final page. For now, I'm limited to PS5 and the af:panelStretchLayout component as a starting point, because that's the release my customer is on.

Skins

You won't be able to use media queries, or anything with "@" notation, in the skin CSS file - the skin pre-processor will remove all extraneous "@" from the CSS file. The solution is to split your CSS into two separate files: a skin CSS file, and a plain CSS file where you add the media queries. The catch is that you won't be able to use media queries for any Faces components directly. We can, though, still apply media queries to components like af:panelGroupLayout and af:panelBorderLayout through their styleClass property, to enable these components to respond to the iPad orientation by changing their dimensions and font sizes, hiding/showing areas, etc.

Difference between responsive and adaptive design

The best definition of adaptive vs. responsive web design I could find is this: "Responsive web design," as coined by Ethan Marcotte, means "fluid grids, fluid images/media & media queries." "Adaptive web design," as I use it, is about creating interfaces that adapt to the user's capabilities (in terms of both form and function). To me, "adaptive web design" is just another term for "progressive enhancement" of which responsive web design can (and often should) be an integral part, but is a more holistic approach to web design in that it also takes into account varying levels of markup, CSS, JavaScript and assistive technology support. Responsive/adaptive web design is much more than slapping an HTML template with CSS around your content or application.
The content and application themselves are part of your web design - in other words, a responsive template is just an afterthought if it does not originate from a responsive design that involves the whole web application.

Tips on responsive / adaptive design with ADF/WebCenter

Some of the tips listed below were already mentioned in multiple blog posts about ADF layout and skinning, but they are still worth remembering. A simple guideline for ADF/WebCenter apps would be to first create high-level groups of devices, for example: smartphones, tablets, and desktop. For each of these large groups, create the basic structure to provide responsiveness: a page template, a skin, and an external CSS:

pagetemplate_smartphone.jspx, smartphone_skin.css, smartphone-responsive.css
pagetemplate_tablet.jspx, tablet_skin.css, tablet-responsive.css
pagetemplate_desktop.jspx, desktop_skin.css, desktop-responsive.css

These three assets can be changed on the fly through a user-agent check on the server side, delivering the right UI to the right device. Within each of the assets, you can make fine adjustments for each subgroup of devices with media queries - for example, smartphones with different screen dimensions and pixel densities. Having these three groups and the corresponding assets per group seems to be a good compromise between trying to put everything into a single set of assets - especially considering the constraints above - and going to the other side of the spectrum, creating assets per discrete device (iPhone4, iPhone5, Nexus, S3, etc.). Keep in mind that these are my rules and are not in any shape or form a best practice - this is what fits best the scenarios I've been working with.

If you need to use HTML tags on your page, surround them with af:group to protect the DOM structure.

For stretchable/fluid layouts: use non-stretching containers (panelGroupLayout, panelBorderLayout, ...); panelBorderLayout can be used to approximate the HTML table element. To avoid multiple scroll bars, do not nest scrolling PanelGroupLayout components; consider layout="vertical". Most stretchable ADF components also work in a flowing context with dimensionsFrom="auto". To stretch a component horizontally, use styleClass="AFStretchWidth" instead of "width:100%".

Skinning: don't use CSS3 @media, @import, animations, etc. in skin CSS files - they will be removed. CSS3 properties within a class (box-shadow, transition, etc.) work just fine. Consider resetting some skin classes to better control their rendering:

body {color: inherit; font: inherit;}
af|document {-tr-inhibit: all;}
af|commandLink {-tr-inhibit: all;}
af|goLink {-tr-inhibit: all;}
af|inputText::content {font: inherit;}

Specific meta tags and CSS properties: use <meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"/> to avoid zooming (if you want). Use -webkit-overflow-scrolling: touch to enable native momentum scrolling within overflown areas (here). Use text-rendering: optimizeLegibility to improve readability (here). Use text-overflow: ellipsis to gracefully crop overflown text (here). The meta tags are included in each and every page in the metaContainer facet of the af:document tag. You can also use JavaScript to inject the meta tags from the template. For the purpose of the example, I wanted to use as few workarounds as possible.
The iPad template and sample application

This sample application has been built as a WebCenter Portal application, but you will also be able to reuse the template and techniques in a vanilla ADF application. Keep in mind that I'm neither a designer nor a CSS specialist, so please don't bash me too much over the messy CSS file you'll find in the application. I've extended the PreferencesBean class provided with WebCenter Portal and added code to dynamically change the template and skin on the fly. This is the sample application in landscape orientation: This is the sample application in portrait orientation - the left side menu hides automatically based on a CSS media query: Another screenshot with a skinned popup opened: This is a sample application for you to play with - ideally you shouldn't use it as a starting point. On the left side bar you will find links rendered from a WebCenter Portal navigation model - each link triggers a full request through an af:goLink, while the light blue PPR button triggers a PPR navigation. The dark blue toolbar buttons at the top don't have any function, while the Approve and Reject buttons show a skinned popup. The search box, of course, doesn't have any behavior attached to it either. There's a known issue right now with some PPR calls randomly generating a 403 error that redirects to the login page - I didn't have time to investigate whether this is iOS6-specific or not - if you have any insights please let me know your findings. You can download the sample here.

    Read the article

  • WebSocket and Java EE 7 - Getting Ready for JSR 356 (TOTD #181)

    - by arungupta
    WebSocket is developed as part of the HTML5 specification and provides a bi-directional, full-duplex communication channel over a single TCP socket. It provides a dramatic improvement over the traditional approaches of polling, long-polling, and streaming for two-way communication, with no latency from establishing new TCP connections for each HTTP message. There is a WebSocket API and the WebSocket Protocol. The Protocol defines "handshake" and "framing": the handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection, and the framing defines the wire format of the message. The design philosophy is to keep the framing to a minimum to avoid overhead. Both text and binary data can be sent using the API. WebSocket may look like a competing technology to Server-Sent Events (SSE), but it is not. Here are the key differences:

WebSocket can send and receive data from a client. A typical example of WebSocket is a two-player game or a chat application. Server-Sent Events can only push data to the client; typical examples of SSE are a stock ticker or a news feed. With SSE, XMLHttpRequest can be used to send data to the server.

For server-only updates, WebSocket has extra overhead and programming can be unnecessarily complex. SSE provides a simple and easy-to-use model that is much better suited.

SSEs are sent over traditional HTTP, so no modification is required on the server side. WebSocket requires servers that understand the protocol.

SSE has several features that are missing from WebSocket, such as automatic reconnection, event IDs, and the ability to send arbitrary events. The client automatically tries to reconnect if the connection is closed; the default wait before reconnecting is 3 seconds and can be configured by including a "retry: XXXX\n" header, where XXXX is the number of milliseconds to wait before trying to reconnect. An event stream can include a unique event identifier, which allows the server to determine which events need to be fired to each client in case the connection is dropped in between. The data can span multiple lines and can be of any text format as long as the EventSource message handler can process it.

WebSocket provides true real-time updates; SSE can be configured to come close to real time by setting appropriate timeouts.

OK, so, all excited about WebSocket? Want to convert your POJOs into WebSocket endpoints? websocket-sdk and GlassFish 4.0 are here to help! The complete source code shown in this project can be downloaded here. On the server side, the WebSocket SDK converts a POJO into a WebSocket endpoint using simple annotations. Here is how a WebSocket endpoint looks:

@WebSocket(path="/echo")
public class EchoBean {
    @WebSocketMessage
    public String echo(String message) {
        return message + " (from your server)";
    }
}

In this code, "@WebSocket" is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. "@WebSocketMessage" indicates the Java method that is invoked when the endpoint receives a message. This method implementation echoes the received message concatenated with an additional string. The client-side HTML page looks like:

<div style="text-align: center;">
    <form action="">
        <input onclick="send_echo()" value="Press me" type="button">
        <input id="textID" name="message" value="Hello WebSocket!" type="text"><br>
    </form>
</div>
<div id="output"></div>
WebSocket allows full-duplex communication: the client (a browser in this case) can send a message to the server (a WebSocket endpoint in this case), and the server can send a message to the client at the same time. This is unlike HTTP, which follows a "request" followed by a "response". In this code, the "send_echo" method in the JavaScript is invoked on the button click. There is also a <div> placeholder to display the response from the WebSocket endpoint. The JavaScript looks like:

<script language="javascript" type="text/javascript">
    var wsUri = "ws://localhost:8080/websockets/echo";
    var websocket = new WebSocket(wsUri);
    websocket.onopen = function(evt) { onOpen(evt) };
    websocket.onmessage = function(evt) { onMessage(evt) };
    websocket.onerror = function(evt) { onError(evt) };
    function init() {
        output = document.getElementById("output");
    }
    function send_echo() {
        websocket.send(textID.value);
        writeToScreen("SENT: " + textID.value);
    }
    function onOpen(evt) {
        writeToScreen("CONNECTED");
    }
    function onMessage(evt) {
        writeToScreen("RECEIVED: " + evt.data);
    }
    function onError(evt) {
        writeToScreen('<span style="color: red;">ERROR:</span> ' + evt.data);
    }
    function writeToScreen(message) {
        var pre = document.createElement("p");
        pre.style.wordWrap = "break-word";
        pre.innerHTML = message;
        output.appendChild(pre);
    }
    window.addEventListener("load", init, false);
</script>

In this code:

The URI to connect to on the server side is of the format ws://<HOST>:<PORT>/websockets/<PATH>. "ws" is a new URI scheme introduced by the WebSocket protocol. <PATH> is the path on the endpoint where the WebSocket messages are accepted; in our case, it is ws://localhost:8080/websockets/echo. WEBSOCKET_SDK-1 will ensure that the context root is included in the URI as well.

WebSocket is created as a global object so that the connection is created only once. This object establishes a connection with the given host, port and the path at which the endpoint is listening.

The WebSocket API defines several callbacks that can be registered on specific events. The "onopen", "onmessage", and "onerror" callbacks are registered in this case. The callbacks print a message on the browser indicating which one is called, and additionally print the data sent/received.

On the button click, the WebSocket object is used to transmit text data to the endpoint. Binary data can be sent as one blob or using buffering.

The HTTP request headers sent for the WebSocket call are:

GET ws://localhost:8080/websockets/echo HTTP/1.1
Origin: http://localhost:8080
Connection: Upgrade
Sec-WebSocket-Extensions: x-webkit-deflate-frame
Host: localhost:8080
Sec-WebSocket-Key: mDbnYkAUi0b5Rnal9/cMvQ==
Upgrade: websocket
Sec-WebSocket-Version: 13

And the response headers received are:

Connection: Upgrade
Sec-WebSocket-Accept: q4nmgFl/lEtU2ocyKZ64dtQvx10=
Upgrade: websocket
(Challenge Response): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

The headers are shown in Chrome as in the screenshot below. The complete source code shown in this project can be downloaded here. The builds from websocket-sdk are integrated into GlassFish 4.0 builds. Would you like to live on the bleeding edge? Then follow the instructions below to check out the workspace and install the latest SDK:

Check out the source code: svn checkout https://svn.java.net/svn/websocket-sdk~source-code-repository

Build and install the trunk in your local repository: mvn install

Copy "./bundles/websocket-osgi/target/websocket-osgi-0.3-SNAPSHOT.jar" to "glassfish3/glassfish/modules/websocket-osgi.jar" in your GlassFish 4 latest promoted build. Notice that you need to overwrite the JAR file.
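As a purely illustrative aside (not part of the original Java/JavaScript sample): once the endpoint is running, a non-browser client can exercise it too. Here is a minimal sketch using .NET 4.5's System.Net.WebSockets against the same echo URI; it assumes the GlassFish server above is up, and async Main requires a C# 7.1+ compiler:

using System;
using System.Net.WebSockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class EchoClient
{
    static async Task Main()
    {
        using (var ws = new ClientWebSocket())
        {
            // Same ws:// URI the JavaScript client uses
            await ws.ConnectAsync(new Uri("ws://localhost:8080/websockets/echo"),
                                  CancellationToken.None);

            // Send one text frame...
            var msg = Encoding.UTF8.GetBytes("Hello WebSocket!");
            await ws.SendAsync(new ArraySegment<byte>(msg),
                               WebSocketMessageType.Text, true, CancellationToken.None);

            // ...and read the echoed frame back
            var buffer = new byte[1024];
            var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer),
                                               CancellationToken.None);
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, result.Count));
        }
    }
}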
Anybody interested in building a cool application using WebSocket and getting it running on GlassFish? :-) This work will also feed into JSR 356 - Java API for WebSocket. On a lighter note, there seems to be little agreement on the name. Here are some of the options that are prevalent:

WebSocket (W3C API, though the URL is www.w3.org/TR/websockets)
Web Socket (HTML5 Demos - html5demos.com/web-socket)
Websocket (Jenkins Plugin - wiki.jenkins-ci.org/display/JENKINS/Websocket%2BPlugin)
WebSockets (used by Mozilla - developer.mozilla.org/en/WebSockets, though they use WebSocket as well)
Web sockets (HTML5 Working Group - www.whatwg.org/specs/web-apps/current-work/multipage/network.html)
Web Sockets (Chrome Blog - blog.chromium.org/2009/12/web-sockets-now-available-in-google.html)

I prefer "WebSocket", as that seems to be the most common usage and is used by the W3C API as well. What do you use?

    Read the article

  • At the Java DEMOgrounds - JavaFX

    - by Janice J. Heiss
    JavaFX has made rapid progress in the last year, as evidenced by the wealth of demos on display. A few questions appear to be prominent in the minds of JavaFX enthusiasts; here are some of them, with answers provided by Oracle's JavaFX team.

When will the rest of the JavaFX code be available in open source?
Oracle has started to open source JavaFX. The existing platform code will finish being committed to OpenJFX by the end of the year.

Why should I use JavaFX instead of HTML5?
We see JavaFX as complementary to HTML5, and most companies we talk to react positively once they understand how they can benefit from a hybrid solution. As most HTML5 developers will tell you, the biggest obstacle to deploying HTML5 applications is fragmentation. JavaFX offers a convenient way to render HTML and JavaScript within its WebView component, which provides the same level of quality and features across Windows, Mac, and Linux. Additionally, JavaScript in WebView can make calls into the Java code, and vice versa, allowing developers to tap into the best of both worlds.

What is the market penetration of JavaFX?
It is currently limited, as we just made JavaFX available on Mac and Linux in August, but we expect JavaFX to be present on millions of desktop-type systems now that JavaFX is included as part of the JRE. We have also significantly lowered the level of effort required to deploy an application bundling the JRE and JavaFX runtime libraries. Finally, we are seeing a lot of interest from companies operating in the embedded market, who have found it hard to develop compelling UIs with existing technologies.

Below are summaries of the JavaFX demos on display at JavaOne 2012:

JavaFX Ensemble
Ensemble is a collection of over 100 JavaFX samples packaged as a JavaFX application. This demo is especially useful to those new to JavaFX, or those not familiar with its latest features (e.g. canvas, color picker). Ensemble is the reference for getting familiar with JavaFX functionality. Each sample can be run from within Ensemble, and the API for each sample, as well as the source code, are available alongside the sample. The samples' source code can be saved as a NetBeans project for convenience, or can be copied as-is into any other Java IDE. The version of Ensemble shown is packaged as a native Windows application, including the JRE and JavaFX libraries. It was created with the JavaFX packager, which provides multiple packaging options and frees developers from the cumbersome and error-prone process of packaging a Java application.

FX Experience Tools
FX Experience Tools is a JavaFX application that provides different utilities to create new skins for your JavaFX applications. One of the most powerful features of JavaFX is the ability to skin applications via CSS. Since not all Java developers are familiar with CSS, these utilities are a great starting point for creating custom skins. JavaFX allows developers to easily customize the look and feel of their applications through CSS, and FX Experience Tools makes it easy to create new themes for JavaFX applications even if you are not familiar with CSS. FX Experience Tools is a JavaFX application packaged as a native application including the JRE and JavaFX runtime libraries, and shows how this type of deployment simplifies the packaging of Java applications without requiring developers to master the intricacies of Java application packaging.
The download site for FX Experience Tools is http://fxexperience.com/2012/03/announcing-fx-experience-tools/

JavaFX Scene Builder
JavaFX Scene Builder is a visual layout tool that lets users quickly design the UI of a JavaFX application without coding. Users can drag and drop UI components, modify their properties, and apply style sheets, and the FXML code for the layout is automatically generated in the background. The result is an FXML file that can then be combined with a Java project by binding the UI to the application's logic. Developers can easily create user interfaces for their application, as well as separate the application's UI from the application logic for easier maintenance. Attendees can get this app by going to javafx.com and checking the link at the top of the "Overview" page. Scene Builder allows developers to easily lay out JavaFX UI controls, charts, shapes, and containers, so that you can quickly prototype user interfaces. It generates FXML, an XML-based markup language that enables users to define an application's user interface separately from the application logic. Scene Builder can be used in combination with any Java IDE, but is more tightly integrated with NetBeans IDE. It is written as a JavaFX application, with native desktop integration on Windows and Mac OS X - a perfect example of a JavaFX application packaged as a native application. Scene Builder is available for your preferred development platform: besides the GA release on Windows and Mac, a Developer Preview of Scene Builder for Linux has just been made available.

Scenic View
Scenic View is a tool that can be used to understand the current state of your application UI and to easily manipulate properties of the scenegraph without having to keep editing your code. Creating UIs is a complex process, and detecting issues by repeatedly editing, compiling, and re-testing can be hard and tedious; Scenic View is a great diagnostics tool that helps developers identify these issues and correct them at runtime. Attendees can get Scenic View by going to javafx.com, selecting the "Community" tab, and clicking the link under the "Third Party Tools and Utilities" section. Scenic View allows developers to easily examine the state of a JavaFX application scenegraph while the application is running. Some of the latest features added to Scenic View include event monitoring, javadoc browsing, and contextual menus. The download site for Scenic View is available here: http://fxexperience.com/scenic-view/

Conference Tour
Conference Tour is an application that lets users discover some of the major Java conferences throughout the world. The Conference Tour application shows how simple it is to mix JavaFX and HTML5 into a single, interactive application. Attendees get Conference Tour here. JavaFX includes a Web engine based on WebKit that provides a consistent web interface to render HTML5 across operating systems, within a JavaFX application. JavaFX features a bi-directional bridge that allows Java APIs to call JavaScript within WebView, or allows JavaScript to make calls to Java APIs.
This allows developers to leverage the best of both worlds. Java EE developers can take advantage of WebView and the JavaScript-Java bridge to allow their HTML clients to seamlessly bypass the Web browser's sandbox and access native system resources, providing a richer user experience.

FXMediaPlayer
FXMediaPlayer is an application that lets developers check different media functionality in JavaFX, such as the synthesizer or support for HTTP Live Streaming (HLS). This demo shows how developers can embed video content in their Java applications. JavaFX leverages the underlying video (e.g., H.264) and audio (e.g., AAC) codecs on the user's computer, and JavaFX APIs allow developers to interact with the video content (e.g. play/pause, or programmable markers). Some of the latest media features introduced in JavaFX 2.2 include HTTP Live Streaming (HLS). Obviously there is a lot for JavaFX enthusiasts to chew on!

    Read the article

  • facebook application using iframe on Facebook Developer Toolkit 3.0

    - by adveb
    Hey, I am trying to build a Facebook iframe application using the Facebook Developer Toolkit 3.01 (ASP.NET, C#). I am working from the iframe sample of the toolkit, which can be downloaded here: www.facebooktoolkit.codeplex.com/releases/view/39727. This is my Facebook application, which is the same as the iframe sample: http://apps.facebook.com/alefbet/. This is my code; it has 2 pages, a master page and a default page. These 2 pages are the same as in the iframe sample.

1) This is the master page:

public partial class IFrameMaster : Facebook.Web.CanvasIFrameMasterPage
{
    public IFrameMaster()
    {
        RequireLogin = true;
    }
}

2) This is Default.aspx:

public partial class Default : System.Web.UI.Page
{
    private const string SCRIPT_BLOCK_NAME = "dynamicScript";

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
            {
                SendThankYouEmail();
            }
            Response.Redirect("ThankYou.aspx");
        }
        else
        {
            if (Master.Api.Users.HasAppPermission(Enums.ExtendedPermissions.email))
            {
                emailPermissionPanel.Visible = false;
            }
            CreateScript();
        }
    }

    private void SendThankYouEmail()
    {
        var subject = "Thank you for telling us your favorite color";
        var body = "Thank you for telling us what your favorite color is. We hope you have enjoyed using this application. Encourage your friends to tell us their favorite color as well!";
        this.Master.Api.Notifications.SendEmail(this.Master.Api.Session.UserId.ToString(), subject, body, string.Empty);
    }

    private void CreateScript()
    {
        var saveColorScript = @"
            function saveColor(color) {
                document.getElementById('" + colorInput.ClientID + @"').value = color;
            }
            function submitForm() {
                document.getElementById('" + form.ClientID + @"').submit();
            }
        ";
        if (!ClientScript.IsClientScriptBlockRegistered(SCRIPT_BLOCK_NAME))
        {
            ClientScript.RegisterClientScriptBlock(this.GetType(), SCRIPT_BLOCK_NAME, saveColorScript);
        }
    }
}

My directory structure is:
1) The master page is in the root.
2) Default.aspx is in the root/alfbet directory.
3) I also have the xd_receiver.htm inside the root/channel directory.

Also, inside the master page there is the following line:

<script type="text/javascript">
    FB_RequireFeatures(["XFBML"], function() {
        FB.Facebook.init("c81f17ee4d4ffc5113c55f8b99fdcab5", "channel/xd_receiver.htm");
    });
</script>

The problem is that the application doesn't work: apps.facebook.com/alefbet/default.aspx. Why doesn't it work? Please help me and others who are also stuck on this issue. I tried lots of things; one of them was to display the user id. For that I put a label in Default.aspx and wrote lblTest.Text = Master.Api.Users.GetInfo().uid.ToString(); and it doesn't even get to this line. I know it because the label keeps displaying the word "label". Thank you very much.

    Read the article

  • Problem with FedEx Address validation web service

    - by DJ Matthews
    Hi, I'm trying to get started with FedEx's Address Validation service and I'm running into a road block with FedEx's own demo application. This is the code in their app:

Sub Main()
    ''# Build a AddressValidationRequest object
    Dim request As AddressValidationRequest = New AddressValidationRequest()
    Console.WriteLine("--- Setting Credentials ---")
    request.WebAuthenticationDetail = New WebAuthenticationDetail()
    request.WebAuthenticationDetail.UserCredential = New WebAuthenticationCredential()
    request.WebAuthenticationDetail.UserCredential.Key = "###" ''# Replace "XXX" with the Key
    request.WebAuthenticationDetail.UserCredential.Password = "###" ''# Replace "XXX" with the Password
    Console.WriteLine("--- Setting Account Information ---")
    request.ClientDetail = New ClientDetail()
    request.ClientDetail.AccountNumber = "###" ''# Replace "XXX" with clients account number
    request.ClientDetail.MeterNumber = "###" ''# Replace "XXX" with clients meter number
    request.TransactionDetail = New TransactionDetail()
    request.TransactionDetail.CustomerTransactionId = "Address Validation v2 Request using VB.NET Sample Code" ''# This is just an echo back
    request.Version = New VersionId()
    request.RequestTimestamp = DateTime.Now
    Console.WriteLine("--- Setting Validation Options ---")
    request.Options = New AddressValidationOptions()
    request.Options.CheckResidentialStatus = True
    request.Options.MaximumNumberOfMatches = 5
    request.Options.StreetAccuracy = AddressValidationAccuracyType.LOOSE
    request.Options.DirectionalAccuracy = AddressValidationAccuracyType.LOOSE
    request.Options.CompanyNameAccuracy = AddressValidationAccuracyType.LOOSE
    request.Options.ConvertToUpperCase = True
    request.Options.RecognizeAlternateCityNames = True
    request.Options.ReturnParsedElements = True
    Console.WriteLine("--- Address 1 ---")
    request.AddressesToValidate = New AddressToValidate(1) {New AddressToValidate(), New AddressToValidate()}
    request.AddressesToValidate(0).AddressId = "WTC"
    request.AddressesToValidate(0).Address = New Address()
    request.AddressesToValidate(0).Address.StreetLines = New String(0) {"10 FedEx Parkway"}
    request.AddressesToValidate(0).Address.PostalCode = "38017"
    request.AddressesToValidate(0).CompanyName = "FedEx Services"
    Console.WriteLine("--- Address 2 ---")
    request.AddressesToValidate(1).AddressId = "Kinkos"
    request.AddressesToValidate(1).Address = New Address()
    request.AddressesToValidate(1).Address.StreetLines = New String(0) {"50 N Front St"}
    request.AddressesToValidate(1).Address.PostalCode = "38103"
    request.AddressesToValidate(1).CompanyName = "FedEx Kinkos"
    Dim addressValidationService As AddressValidationService.AddressValidationService = New AddressValidationService.AddressValidationService
    ''# Try
    ''# This is the call to the web service passing in a AddressValidationRequest and returning a AddressValidationReply
    Console.WriteLine("--- Sending Request..... ---")
    Dim reply As New AddressValidationReply()
    reply = addressValidationService.addressValidation(request)
    Console.WriteLine("--- Processing request.... ---") ''# This is where I get the error
    If (Not reply.HighestSeverity = NotificationSeverityType.ERROR) And (Not reply.HighestSeverity = NotificationSeverityType.FAILURE) Then
        If (Not reply.AddressResults Is Nothing) Then
            For Each result As AddressValidationResult In reply.AddressResults
                Console.WriteLine("Address Id - " + result.AddressId)
                Console.WriteLine("--- Proposed Details ---")
                If (Not result.ProposedAddressDetails Is Nothing) Then
                    For Each detail As ProposedAddressDetail In result.ProposedAddressDetails
                        Console.WriteLine("Score - " + detail.Score)
                        Console.WriteLine("Address - " + detail.Address.StreetLines(0))
                        Console.WriteLine(" " + detail.Address.StateOrProvinceCode + " " + detail.Address.PostalCode + " " + detail.Address.CountryCode)
                        Console.WriteLine("Changes -")
                        For Each change As AddressValidationChangeType In detail.Changes
                            Console.WriteLine(change.ToString())
                        Next
                        Console.WriteLine("")
                    Next
                End If
                Console.WriteLine("")
            Next
        End If
    Else
        For Each notification As Notification In reply.Notifications
            Console.WriteLine(notification.Message)
        Next
    End If
Catch e As SoapException
    Console.WriteLine(e.Detail.InnerText)
Catch e As Exception
    Console.WriteLine(e.Message)
End Try
Console.WriteLine("Press any key to quit !")
Console.ReadKey()
End Sub

It seems to send the request object to the web service, but the "reply" object comes back as Nothing. I could understand it if I had written the code, but good god... they can't even get their own code to work? Has anyone else seen/fixed this problem?

    Read the article

  • How can we implement change notification propagation for WPF and SL in the MVVM pattern?

    - by Firoso
    Here's an example scenario targeting MVVM WPF/SL development:

The view data-binds to the view model property "Target". "Target" exposes a field of an object called "data" that exists in the local application model, called "Original". When "Original" changes, it should raise a notification to the view model and then propagate that change notification to the view.

Here are the solutions I've come up with, but I don't like any of them all that much. I'm looking for other ideas; by the time we come up with something rock solid I'm certain Microsoft will have released .NET 5 with WPF/SL extensions for better MVVM development tools. For now the question is: "What have you done to solve this problem and how has it worked out for you?"

Option 1.
Proposal: Attach a handler to data's PropertyChanged event that watches for string values of properties it cares about that might have changed, and raises the appropriate notification.
Why I don't like it: Changes don't bubble naturally, objects must be explicitly watched, and if data changes to a new source, events must be un-registered/registered.
Why I kind of like it: I get explicit control over propagation of changes, and I don't have to use any types that belong at a higher level of the application, such as dependency properties.

Option 2.
Proposal: Attach a handler to data's PropertyChanged event that re-raises the event across all properties using the same property name.
Why I don't like it: This is essentially the same as option 1, but less intelligent, and it forces me to never change my property names, as they have to be the same as the property names on data.
Why I kind of like it: It's very easy to set up and I don't have to think about it... Then again, if I try to think and change names to things that make sense, I shoot myself in the foot, and then I have to think about it!

Option 3.
Proposal: Inherit my view model from DependencyObject and notify binding sources of changes directly.
Why I don't like it: I'm not even 100% sure dependency properties/objects can DO this; it was just a thought to look into. Also, I don't personally feel that WPF/SL types like DependencyObject belong at the view model level.
Why I kind of like it: IF it has the capability that I'm seeking, then it's a good answer! Minus that pesky layering issue.

Option 4.
Proposal: Use a consistent agent messaging system based on the Task Parallel Library's Dataflow blocks to propagate everything through linked pipelining.
Why I don't like it: I've never tried this, and somehow I think it will be lacking; plus it requires me to think about my code completely differently all the way around.
Why I kind of like it: It has the possibility of allowing me to do some VERY fun manipulations with a push-based data model, using ActionBlocks as validation AND setters to then privately change view model properties and explicitly control PropertyChanged notifications.
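For reference, here is a minimal sketch of what Option 1 might look like. All the names (Original, Zipcode, TargetViewModel) are hypothetical stand-ins for the "data"/"Target" pairing described above, not code from the question:

using System.ComponentModel;

// Hypothetical model object ("Original" / "data" in the scenario above).
public class Original : INotifyPropertyChanged
{
    private string _zipcode;
    public string Zipcode
    {
        get { return _zipcode; }
        set
        {
            _zipcode = value;
            var h = PropertyChanged;
            if (h != null) h(this, new PropertyChangedEventArgs("Zipcode"));
        }
    }
    public event PropertyChangedEventHandler PropertyChanged;
}

// View model that explicitly watches the model and re-raises notifications.
public class TargetViewModel : INotifyPropertyChanged
{
    private readonly Original _data;

    public TargetViewModel(Original data)
    {
        _data = data;
        // Option 1: watch the one property we care about and re-raise the
        // notification under the view model's own property name.
        _data.PropertyChanged += (s, e) =>
        {
            if (e.PropertyName == "Zipcode") Raise("Zipcode");
        };
    }

    public string Zipcode
    {
        get { return _data.Zipcode; }
        set { _data.Zipcode = value; } // model raises, handler re-raises
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void Raise(string name)
    {
        var h = PropertyChanged;
        if (h != null) h(this, new PropertyChangedEventArgs(name));
    }
}

As the question notes, the wiring is explicit per property, and swapping _data for a new instance means unhooking and rehooking the handler.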

    Read the article

  • Good Email Notification Sending Service

    - by Philibert Perusse
    I need to send a few, but important, email notifications to individual users. For instance, when they register their software I send them a confirmation email. Right now I am using 'sendmail' from my Perl CGI script to do the job, and most of my automated emails are lost or marked as junk. Unfortunately, I am using shared hosting services and don't have very good control over the SPF and SenderID DNS records. Even worse, some other user of that shared server has been infected with some kind of SPAM-BOT and the IP is now blacklisted until further notice! Anyway, I just don't want to deal with this kind of headache.

I am looking for an online service that I will be able to subscribe to and pay something like 0.10$ per email I send, with no monthly fees. I just need an API to be able to send the email from the PHP or Perl code I will have to write. I have been looking around at all those "Email Sending Services" and they are all wrapped around creating campaigns and managing lists for bulk email marketing distribution and newsletters. But remember, I want to send an email notification to a "single" recipient. So far, I have looked at MailChimp, SocketLabs, iContact, ConstantContact, StreamSend and so many others, to no avail. I have seen one comment on Hacker News saying that MailChimp has an API for transactional e-mails (i.e. ad-hoc ones, to welcome a user for example), so you're not just restricted to using them for bulk emails. But I cannot find this in the API documentation supplied; maybe it was removed.

Any suggestions out there? Here is a summary of my requirements:

Allows ad hoc sending of email to a single recipient. Throughput may well be throttled, I don't care; I am sending like 2-5 emails a day.
API available in PHP or Perl to connect to that web service.
Ideally I can send HTML-formatted emails, otherwise I will live with text only.
Solution not too expensive; between 0.01$ and 0.25$ per email would be acceptable.
No recurring monthly fees.

    Read the article

  • C# DirectSound - Capture buffers not continuous

    - by Wizche
    Hi, I'm trying to capture raw data from my line-in using DirectSound. My problem is that from one buffer to the next the data is inconsistent; if, for example, I capture a sine wave, I see a jump between my last buffer and the new one. To detect this I use a graph widget to draw the first 500 elements of the last buffer and the 500 elements from the new one: Snapshot

I initialized my buffer this way:

format = new WaveFormat
{
    SamplesPerSecond = 44100,
    BitsPerSample = (short)bitpersample,
    Channels = (short)channels,
    FormatTag = WaveFormatTag.Pcm
};
format.BlockAlign = (short)(format.Channels * (format.BitsPerSample / 8));
format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign;
_dwNotifySize = Math.Max(4096, format.AverageBytesPerSecond / 8);
_dwNotifySize -= _dwNotifySize % format.BlockAlign;
_dwCaptureBufferSize = NUM_BUFFERS * _dwNotifySize; // my capture buffer
_dwOutputBufferSize = NUM_BUFFERS * _dwNotifySize / channels; // my output buffer

I set my notifications, one at half the buffer and one at the end:

_resetEvent = new AutoResetEvent(false);
_notify = new Notify(_dwCapBuffer);
bpn1 = new BufferPositionNotify();
bpn1.Offset = ((_dwCapBuffer.Caps.BufferBytes) / 2) - 1;
bpn1.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
bpn2 = new BufferPositionNotify();
bpn2.Offset = (_dwCapBuffer.Caps.BufferBytes) - 1;
bpn2.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
_notify.SetNotificationPositions(new BufferPositionNotify[] { bpn1, bpn2 });
observer.updateSamplerStatus("Events listener initialization complete!\r\n");

And here is how I process the events:

/* Process thread */
private void eventReceived()
{
    int offset = 0;
    _dwCaptureThread = new Thread((ThreadStart)delegate
    {
        _dwCapBuffer.Start(true);
        while (isReady)
        {
            _resetEvent.WaitOne(); // Notification received

            /* Read the captured buffer */
            Array read = _dwCapBuffer.Read(offset, typeof(short), LockFlag.None, _dwOutputBufferSize - 1);
            observer.updateTextPacket("Buffer: " + count.ToString() + " # " + read.GetValue(read.Length - 1).ToString() + " # " + read.GetValue(0).ToString() + "\r\n");

            /* Print last/new part of the buffer to the debug graph */
            short[] graphData = new short[1001];
            Array.Copy(read, graphData, 1000);
            db.SetBufferDebug(graphData, 500);
            observer.updateGraph(db.getBufferDebug());

            offset = (offset + _dwOutputBufferSize) % _dwCaptureBufferSize;

            /* Out buffer not used */
            /*_dwDevBuffer.Write(0, read, LockFlag.EntireBuffer);
            _dwDevBuffer.SetCurrentPosition(0);
            _dwDevBuffer.Play(0, BufferPlayFlags.Default);*/
        }
        _dwCapBuffer.Stop();
    });
    _dwCaptureThread.Start();
}

Any advice? I'm sure I'm failing somewhere in the event processing, but I can't find where. I had developed the same application using the WaveIn API and it worked well. Thanks a lot...

    Read the article

  • Perl regex matching output from `w -hs` command

    - by Bushman
    I'm trying to write a Perl script that will work better with KDE's kwrited, which, as far as I can tell, is connected to a pts and puts every line it receives through the KDE system tray notifications, with the title "KDE write daemon". Unfortunately, it makes a separate notification for each and every line, so it spams up the system tray with multiline messages on regular old write, and for some reason it cuts off the entire last line of the message when using wall (One-line messages are also goners.). I was also hoping to make it so that it could broadcast across a LAN with thick clients. Before starting on that (which would require ssh, of course), I tried to make an ssh-less version to make sure it works. Unfortunately, it doesn't.

perl ./write.pl "Testing 1 2 3"

where the following is the contents of ./write.pl:

#!/usr/bin/perl
use strict;
use warnings;

my $message = "";
my $device = "";
my $possibledevice = '`w -hs | grep "/usr/bin/kwrited"`'; #Where is kwrited?
$possibledevice =~ s/^[^\t][\t]//;
$possibledevice =~ s/[\t][^\t][\t ]\/usr\/bin\/kwrited$//;
$possibledevice = '/dev/'.$possibledevice;
unless ($possibledevice eq "") {
    $device = $possibledevice;
}
if ($ARGV[0] ne "") {
    $message = $ARGV[0];
    $device = $ARGV[1];
}
else {
    $device = $ARGV[0] unless $ARGV[0] eq "";
    while (<STDIN>) {
        chomp;
        $message .= <STDIN>;
    }
}
if ($message ne "") {
    system "echo \'$message\' > $device";
}
else {
    print "Error: empty message"
}

produces the following error:

$ perl write.pl "Testing 1 2 3"
Use of uninitialized value $device in concatenation (.) or string at write.pl line 29.
sh: -c: line 0: syntax error near unexpected token `newline'
sh: -c: line 0: `echo 'foo' > '

Somehow, the regular expressions and/or the backtick escape in processing $possibledevice are not working properly, because where kwrited is connected to /dev/pts/0, the following works perfectly:

$ perl write.pl "Testing 1 2 3" /dev/pts/0

    Read the article

  • What to do when you need more verbs in REST

    - by Richard Levasseur
    There is another similar question to mine, but the discussion veered away from the problem I'm encountering. Say I have a system that deals with expense reports (ER). You can create and edit them, add attachments, and approve/reject them. An expense report might look like this:

GET /er/1 =>
{"title": "Trip to NY",
 "totalcost": "400 USD",
 "comments": [
   "john: Please add the total cost",
   "mike: done, can you approve it now?"
 ],
 "approvals": [
   {"john": "Pending"},
   {"finance-group": "Pending"}]
}

That looks fine, right? That's what an expense report document looks like. If you want to update it, you can do this:

POST /er/1 {"title": "Trip to NY 2010"}

If you want to approve it, you can do this:

POST /er/1/approval {"approved": true}

But what if you want to update the report and approve it at the same time? How do we do that? If you only wanted to approve, then doing a POST to something like /er/1/approval makes sense. We could put a flag in the URL, POST /er/1?approve=1, and send the data changes as the body, but that flag doesn't seem RESTful. We could put a special field to be submitted, too, but that seems a bit hacky. If we did that, then why not send up data with attributes like set_title or add_to_cost? We could create a new resource for updating and approving, but (1) I can't think of how to name it without verbs, and (2) it doesn't seem right to name a resource based on what actions can be done to it (what happens if we add more actions?). We could have an X-Approve: True|False header, but headers seem like the wrong tool for the job; it'd also be difficult to set headers without using JavaScript in a browser. We could use a custom media type, application/approve+yes, but that seems no better than creating a new resource. We could create a temporary "batch operations" URL, /er/1/batch/A. The client then sends multiple requests, perhaps POST /er/1/batch/A to update, then POST /er/1/batch/A/approval to approve, then POST /er/1/batch/A/status to end the batch. On the backend, the server queues up all the batch requests somewhere, then processes them in the same backend transaction when it receives the "end batch processing" request. The downside with this is, obviously, that it introduces a lot of complexity.

So, what is a good, general way to solve the problem of performing multiple actions in a single request? General, because it's easy to imagine additional actions that might be done in the same request:

Suppress or send notifications (to email, chat, another system, whatever)
Override some validation (maximum cost, names of dinner attendees)
Trigger backend workflow that doesn't have a representation in the document.
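To make the batch-operations idea above concrete, here is a rough client-side sketch of the sequence it implies. This is purely illustrative: the host, the JSON body of the closing call, and the exact shape of the status resource are assumptions layered on top of the hypothetical URLs from the question:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class BatchedExpenseReportClient
{
    static async Task Main()
    {
        var http = new HttpClient { BaseAddress = new Uri("https://example.org") };

        // Queue an update and an approval under a temporary batch "A"...
        await http.PostAsync("/er/1/batch/A",
            new StringContent("{\"title\": \"Trip to NY 2010\"}",
                              Encoding.UTF8, "application/json"));
        await http.PostAsync("/er/1/batch/A/approval",
            new StringContent("{\"approved\": true}",
                              Encoding.UTF8, "application/json"));

        // ...then end the batch; the server applies everything queued
        // under /er/1/batch/A in a single backend transaction.
        await http.PostAsync("/er/1/batch/A/status",
            new StringContent("{\"state\": \"complete\"}",
                              Encoding.UTF8, "application/json"));
    }
}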

    Read the article

  • Reduce Multiple Errors logging in sysssislog

    - by Akshay
    Need help. I am trying to automate error notifications to be sent in mailers. For that I am querying the sysssislog table. I have placed an "Execute SQL Task" on the package's "OnError" event handler. For testing purposes, I am deliberately trying to load duplicate keys into a table that has a primary key column (so as to get an error). But instead of logging just one error, "Violation of primary key constraint", SSIS records 3 in the table. PFA the screenshot as well. How can I restrict the tool to log only one error and not multiple?

Package structure: Package ("OnError" event handler) - DFT - OLE DB Source - OLE DB Destination

The three logged errors are:

SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80004005 Description: "The statement has been terminated.". An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80004005 Description: "Violation of PRIMARY KEY constraint 'PK_SalesPerson_SalesPersonID'. Cannot insert duplicate key in object 'dbo.SalesPerson'.".

SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "input "OLE DB Destination Input" (56)" failed because error code 0xC020907B occurred, and the error row disposition on "input "OLE DB Destination Input" (56)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.

SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "OLE DB Destination" (43) failed with error code 0xC0209029 while processing input "OLE DB Destination Input" (56). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

Please guide me. Your help is very much appreciated. Thanks

    Read the article

  • Booking form works but does not give feedback

    - by Naim
    I have a form where people can book seats for a given event. It's actually a WordPress plugin. When someone sends their booking, email notifications are sent properly and the event booking status is updated accordingly. The issue is that the booking form does not give any feedback whatsoever: once you hit "book now", the loading graphic keeps spinning and no feedback is ever displayed, like for instance "booking sent successfully" or "there are errors in your form", even though the booking is saved correctly and both sides are notified by email. I guess it's a JavaScript issue; the Firebug console is not showing any errors... So here is the source code of the booking form (you can also experience the issue live here - username: tester, password: testsa). Thanks!

$('#em-booking-form').submit( function(e){
    e.preventDefault();
    var em_booking_doing_ajax = false;
    $.ajax({
        url: EM.bookingajaxurl,
        data: $('#em-booking-form').serializeArray(),
        dataType: 'jsonp',
        type: 'post',
        beforeSend: function(formData, jqForm, options) {
            if(em_booking_doing_ajax){
                alert(EM.bookingInProgress);
                return false;
            }
            em_booking_doing_ajax = true;
            $('.em-booking-message').remove();
            $('#em-booking').append('<div id="em-loading"></div>');
        },
        success : function(response, statusText, xhr, $form) {
            $('#em-loading').remove();
            $('.em-booking-message').remove();
            $('.em-booking-message').remove();
            //show error or success message
            if(response.result){
                $('<div class="em-booking-message-success em-booking-message">'+response.message+'</div>').insertBefore('#em-booking-form');
                $('#em-booking-form').hide();
                $('.em-booking-login').hide();
                $(document).trigger('em_booking_success', [response]);
            }else{
                if( response.errors != null ){
                    if( $.isArray(response.errors) && response.errors.length > 0 ){
                        var error_msg;
                        response.errors.each(function(i, el){
                            error_msg = error_msg + el;
                        });
                        $('<div class="em-booking-message-error em-booking-message">'+error_msg.errors+'</div>').insertBefore('#em-booking-form');
                    }else{
                        $('<div class="em-booking-message-error em-booking-message">'+response.errors+'</div>').insertBefore('#em-booking-form');
                    }
                }else{
                    $('<div class="em-booking-message-error em-booking-message">'+response.message+'</div>').insertBefore('#em-booking-form');
                }
            }
            $('html, body').animate({ scrollTop: $("#em-booking").first().offset().top - 50 }); //sends user back to top of form
            //run extra actions after showing the message here
            if( response.gateway != null ){
                $(document).trigger('em_booking_gateway_add_'+response.gateway, [response]);
            }
            if( !response.result && typeof Recaptcha != 'undefined'){
                Recaptcha.reload();
            }
        },
        complete : function(){
            em_booking_doing_ajax = false;
            $('#em-loading').remove();
        }
    });
    return false;
});

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of textbook (or more likely in all of them) but I seem to be using the wrong keywords to search for it... :(

A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list", e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: for instance, the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that both lists cannot be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option.

So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example) but it always felt like there should be a cleaner and more efficient way.

Here's an example approach:

1. Iterate over the master list
2. Look up each item in the "slave list"
3. Add items that do not yet exist
4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list)
5. When done, iterate once more over the slave list
6. Remove all objects that have not been tagged (see 4.)

Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: in many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list might be a COM interface that exposes nothing but GetFirst-/GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it, both of which are unfortunately not practical in these cases, unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: especially, the elements in the master list might be available as interface references only.
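For what it's worth, here is a rough sketch of the tag-and-sweep approach from the numbered steps above, written in C#. All type and delegate names are hypothetical; it assumes the master list is at least enumerable (e.g. wrapped around GetFirst/GetNext) and that a stable key can be derived from items on both sides:

using System;
using System.Collections.Generic;
using System.Linq;

static class ListSync
{
    // Steps 1-6 from the question: walk the master, add missing slaves,
    // tag keys seen in the master, then sweep untagged slaves away.
    public static void Sync<TMaster, TSlave, TKey>(
        IEnumerable<TMaster> master,
        IList<TSlave> slave,
        Func<TMaster, TKey> masterKey,
        Func<TSlave, TKey> slaveKey,
        Func<TMaster, TSlave> createSlave)
    {
        var existing = slave.ToDictionary(slaveKey);  // lookup for step 2
        var tagged = new HashSet<TKey>();             // the "tags" of step 4

        foreach (var m in master)                     // step 1
        {
            var key = masterKey(m);
            if (!tagged.Add(key)) continue;           // ignore master duplicates
            if (!existing.ContainsKey(key))           // steps 2-3
                slave.Add(createSlave(m));            // expensive, done at most once
        }

        for (int i = slave.Count - 1; i >= 0; i--)    // steps 5-6
            if (!tagged.Contains(slaveKey(slave[i])))
                slave.RemoveAt(i);
    }
}

This runs in roughly linear time per refresh and never rebuilds surviving slave objects, which matches the constraint that creating them is expensive.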

    Read the article

  • Why does my ko computed observable not update bound UI elements when its value changes?

    - by Allen
    I'm trying to wrap a cookie in a computed observable (which I'll later turn into a protectedObservable) and I'm having some problems with the computed observable. I was under the impression that changes to the computed observable would be broadcast to any UI elements that have been bound to it. I've created the following fiddle.

JavaScript:

var viewModel = {};

// simulating a cookie store, this part isnt as important
var cookie = function () {
    // simulating a value stored in cookies
    var privateZipcode = "12345";
    return {
        'write' : function (val) {
            privateZipcode = val;
        },
        'read': function () {
            return privateZipcode;
        }
    }
}();

viewModel.zipcode = ko.computed({
    read: function () {
        return cookie.read();
    },
    write: function (value) {
        cookie.write(value);
    },
    owner: viewModel
});

ko.applyBindings(viewModel);

HTML:

zipcode: <input type='text' data-bind="value: zipcode">
<br />
zipcode: <span data-bind="text: zipcode"></span>

I'm not using an observable to store privateZipcode, since that's really just going to be in a cookie. I'm hoping that the ko.computed will provide the notifications and binding functionality that I need, though most of the examples I've seen with ko.computed end up using a ko.observable underneath the covers. Shouldn't the act of writing the value to my computed observable signal the UI elements that are bound to its value? Shouldn't these just update?

Workaround

I've got a simple workaround where I just use a ko.observable alongside my cookie store, and using that will trigger the required updates to my DOM elements, but this seems completely unnecessary, unless ko.computed lacks the signaling / dependency functionality that ko.observable has. In my workaround fiddle, you'll notice that the only thing that changes is that I added a seperateObservable that isn't really used as a store; its only purpose is to signal to the UI that the underlying data has changed.

// simulating a cookie store, this part isnt as important
var cookie = function () {
    // simulating a value stored in cookies
    var privateZipcode = "12345";

    // extra observable that isnt really used as a store, just to trigger updates to the UI
    var seperateObservable = ko.observable(privateZipcode);
    return {
        'write' : function (val) {
            privateZipcode = val;
            seperateObservable(val);
        },
        'read': function () {
            seperateObservable();
            return privateZipcode;
        }
    }
}();

This makes sense and works as I'd expect, because viewModel.zipcode depends on seperateObservable, and updates to that should (and do) signal the UI to update. What I don't understand is why a call to the write function on my ko.computed doesn't signal the UI to update, since that element is bound to that ko.computed. I suspected that I might have to use something in Knockout to manually signal that my ko.computed has been updated, and I'm fine with that; that makes sense. I just haven't been able to find a way to accomplish that.

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >