Search Results

Search found 2284 results on 92 pages for 'smart zulu'.

  • await, WhenAll, WaitAll, oh my!!

    - by cibrax
    If you are dealing with asynchronous work in .NET, you might know that the Task class has become the main driver for wrapping asynchronous calls. Although this class was officially introduced in .NET 4.0, the programming model for consuming tasks was greatly simplified in C# 5.0 and .NET 4.5 with the addition of the new async/await keywords. In a nutshell, you can use these keywords to make asynchronous calls as if they were sequential, avoiding any fork or callback in the code. The compiler takes care of the rest.

    Yesterday I was writing some code for making multiple asynchronous calls to backend services in parallel. The code looked as follows:

        var allResults = new List<Result>();
        foreach (var provider in providers)
        {
            var results = await provider.GetResults();
            allResults.AddRange(results);
        }
        return allResults;

    You see, I was using the await keyword expecting the calls to run in parallel. Something I did not consider was the overhead this code implied after being compiled. I started an interesting discussion with some smart folks on Twitter. One of them, Tugberk Ugurlu, had the brilliant idea of actually writing some code to make a performance comparison with another approach using Task.WhenAll.

    There are two additional methods you can use to wait for the results of multiple calls in parallel, WhenAll and WaitAll. WhenAll creates a new task and waits for results in that new task, so it does not block the calling thread. WaitAll, on the other hand, blocks the calling thread. This is the code Tugberk initially wrote, which I modified afterwards to also show the results of WaitAll:

        class Program
        {
            private static Func<Stopwatch, Task>[] funcs = new Func<Stopwatch, Task>[]
            {
                async (watch) => { watch.Start(); await Task.Delay(1000); Console.WriteLine("1000 one has been completed."); },
                async (watch) => { await Task.Delay(1500); Console.WriteLine("1500 one has been completed."); },
                async (watch) => { await Task.Delay(2000); Console.WriteLine("2000 one has been completed."); watch.Stop(); Console.WriteLine(watch.ElapsedMilliseconds + "ms has been elapsed."); }
            };

            static void Main(string[] args)
            {
                Console.WriteLine("Await in loop work starts...");
                DoWorkAsync().ContinueWith(task =>
                {
                    Console.WriteLine("Parallel work starts...");
                    DoWorkInParallelAsync().ContinueWith(t =>
                    {
                        Console.WriteLine("WaitAll work starts...");
                        WaitForAll();
                    });
                });
                Console.ReadLine();
            }

            static async Task DoWorkAsync()
            {
                Stopwatch watch = new Stopwatch();
                foreach (var func in funcs)
                {
                    await func(watch);
                }
            }

            static async Task DoWorkInParallelAsync()
            {
                Stopwatch watch = new Stopwatch();
                await Task.WhenAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
            }

            static void WaitForAll()
            {
                Stopwatch watch = new Stopwatch();
                Task.WaitAll(funcs[0](watch), funcs[1](watch), funcs[2](watch));
            }
        }

    After running this code, the results were conclusive:

        Await in loop work starts...
        1000 one has been completed.
        1500 one has been completed.
        2000 one has been completed.
        4532ms has been elapsed.
        Parallel work starts...
        1000 one has been completed.
        1500 one has been completed.
        2000 one has been completed.
        2007ms has been elapsed.
        WaitAll work starts...
        1000 one has been completed.
        1500 one has been completed.
        2000 one has been completed.
        2009ms has been elapsed.

    The await keyword in a loop does not really make the calls in parallel.
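    For the original provider loop, the parallel rewrite starts every call before awaiting any of them. A minimal sketch, assuming GetResults() returns Task<IEnumerable<Result>> as implied by the loop above (with using System.Linq and System.Threading.Tasks in scope):

        // Sketch only, not from the original post: kick off all calls first,
        // then await them together with Task.WhenAll.
        var pending = providers.Select(p => p.GetResults()).ToList(); // all calls start here
        var resultSets = await Task.WhenAll(pending);                 // one await for all of them
        var allResults = resultSets.SelectMany(r => r).ToList();      // flatten the result arrays
        return allResults;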

  • Existential CAML - does an item exist?

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    More CAML and existence. In “SharePoint List Issues” and “Passing the CAML thru the EY of the NEEDL” we saw how to use CAML to return a subset of a list and also how to check the existence of lists, fields, defaults, and values.

    Here is a general function that may be used to get a subset of a list by comparing a “text” type field to a given value. The function is pretty smart. It can be used to check existence or to return a collection of items that may be further processed. It handles non-existing fields and replaces them with the ubiquitous “Title”, but only once!

        /// Build an SPQuery that returns a selected set of columns from a List
        /// titleField must be a "Text" type field
        /// When the titleField parameter is empty ("") "Title" is assumed
        /// When the title parameter is empty ("") All is assumed
        /// When the columnNames parameter is null, the query returns all the fields
        /// When the rowLimit parameter is 0, the query returns all the items;
        /// with a non-zero value, the query returns at most rowLimit items
        ///
        /// usage: to check if an item titled "Blah" exists in your list, do:
        ///   colNames = {"Title"}
        ///   col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1)
        ///   Check col.Count; if > 0 the item exists and is in the collection
        private static SPListItemCollection GetListItemColumnByTitle(SPList list, string titleField, string title, string[] columnNames, uint rowLimit)
        {
            try
            {
                char QT = Convert.ToChar((int)34);
                SPQuery query = new SPQuery();
                if (title != "")
                {
                    string tf = titleField;
                    if (titleField == "") tf = "Title";
                    tf = CAMLThisName(list, tf, "Title");
                    StringBuilder titleQuery = new StringBuilder("<Where><Eq><FieldRef Name=");
                    titleQuery.Append(QT);
                    titleQuery.Append(tf);
                    titleQuery.Append(QT);
                    titleQuery.Append("/><Value Type=");
                    titleQuery.Append(QT);
                    titleQuery.Append("Text");
                    titleQuery.Append(QT);
                    titleQuery.Append(">");
                    titleQuery.Append(title);
                    titleQuery.Append("</Value></Eq></Where>");
                    query.Query = titleQuery.ToString();
                }
                if (columnNames.Length != 0)
                {
                    StringBuilder sb = new StringBuilder("");
                    bool TitleAlreadyIncluded = false;
                    foreach (string columnName in columnNames)
                    {
                        string tst = CAMLThisName(list, columnName, "Title");
                        // Allow Title only once
                        if (tst != "Title" || !TitleAlreadyIncluded)
                        {
                            sb.Append("<FieldRef Name=");
                            sb.Append(QT);
                            sb.Append(tst);
                            sb.Append(QT);
                            sb.Append("/>");
                            if (tst == "Title") TitleAlreadyIncluded = true;
                        }
                    }
                    query.ViewFields = sb.ToString();
                }
                if (rowLimit > 0)
                {
                    query.RowLimit = rowLimit;
                }
                SPListItemCollection col = list.GetItems(query);
                return col;
            }
            catch (Exception ex)
            {
                //Console.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                //sw.WriteLine("GetListItemColumnByTitle" + ex.ToString());
                return null;
            }
        }

    Here I called it for a list in which “Author” (the internal name for “Created”) and “Blah” do not exist. The list of column names is:

        string[] columnNames = {"Test Column1", "Title", "Author", "Allow Multiple Ratings", "Blah"};

    So if I use the following call, I get all the items for which “01-STD MIL_some” has the value of 1. The fields returned are “Test Column1”, “Title”, and “Allow Multiple Ratings”. Because “Title” was already included and the default for non-existing fields is “Title”, it was not replicated for the 2 non-existing fields.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 0);

    The following call checks if there are any items where “01-STD MIL_some” has the value of “1”. Note that I limited the number of returned items to 1.

        SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 1);

    The code also uses the CAMLThisName function, which checks for the existence of a field and returns its InternalName. This is yet another useful function that I use again and again.

        /// <summary>
        /// return a field's internal name (CAMLName)
        /// or the "default" name that you passed.
        /// To check existence pass "" or some funny name like "mud in your eye"
        /// </summary>
        public static string CAMLThisName(SPList list, string name, string def)
        {
            String CAMLName = def;
            SPField fld = GetFieldByName(list, name);
            if (fld != null)
            {
                CAMLName = fld.InternalName;
            }
            return CAMLName;
        }

    That’s all folks?!
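    CAMLThisName depends on a GetFieldByName helper that the excerpt never shows. A minimal sketch of what such a helper might look like (an assumption for illustration, not the author's code):

        /// Hypothetical helper: returns the SPField whose display or internal
        /// name matches, or null when the field does not exist.
        public static SPField GetFieldByName(SPList list, string name)
        {
            foreach (SPField field in list.Fields)
            {
                if (field.Title == name || field.InternalName == name)
                {
                    return field;
                }
            }
            return null; // no such field
        }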

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SP 2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I then decided to turn my attention to my employer's intranet environment, which would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well... kind of.

    Issue

    My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC... well, you get the point, it's jam-packed! But more importantly, it's a stand-alone, self-contained VM environment. Microsoft has instructions on how to connect remotely with Diagnostic Studio here. The deceiving part is that SP2010DS prompts you for credentials, so I thought I was supplying the right account to run the reports. I tried all the PowerShell commands in the link above, but I still ended up getting the following errors:

        06/28/2011 12:50:18  Connecting to remote server failed with the following error message: The WinRM client cannot process the request... If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.

        06/28/2011 12:54:47  Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.

    You might also get an error message like this:

        The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.

    Explanation

    After looking at the event logs on the target environment, I noticed several security exceptions. Looking at the specifics around who was denied access, I could see that the account being denied was the client machine's administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?

    Resolution

    I can tell you what I did, but I wish there were a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected the machine to the domain. I ran all the necessary remote credential commands mentioned here. Those commands add the group policy for you! Once I did this, I was able to authenticate properly and get the reports.

    Conclusion

    You can run SharePoint 2010 Diagnostic Studio remotely, but it will require some specific scenarios. A couple of things I should mention: as far as I understand, SP2010 DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a server admin on the SharePoint server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally I would like to have a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that unless I enable Kerberos on my Windows 7 machine. If you have it installed in the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.
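    For reference, the remote-credential commands the post links to boil down to enabling CredSSP delegation on both ends. A hedged sketch of the usual steps (the server name is a placeholder; run from an elevated PowerShell prompt):

        # On the client running Diagnostic Studio: allow delegating fresh
        # credentials to the target SharePoint server (placeholder name below).
        Enable-WSManCredSSP -Role Client -DelegateComputer "spserver.contoso.local" -Force

        # On the target SharePoint server: accept delegated credentials.
        Enable-WSManCredSSP -Role Server -Force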

  • Identity Globe Trotters (Sep Edition): The Social Customer

    - by Tanu Sood
    Welcome to the inaugural edition of our monthly series - Identity Globe Trotters. Starting today, the last Friday of every month, we will explore regional commentary on Identity Management. We will invite guest contributors from around the world to share their opinions and experiences around Identity Management and highlight regional nuances, specific drivers, solutions and more. Today's feature is contributed by Michael Krebs, Head of Business Development at esentri consulting GmbH, a (SOA) specialized Oracle Gold Partner based in Ettlingen, Germany. In his current role, Krebs is dealing with the latest developments in Enterprise Social Networking and the integration of social media within business processes.

    By Michael Krebs

    The relevance of "easy sign-on" in the age of the "Social Customer"

    With the growth of social networks, the time people spend within those closed "eco-systems" is growing year by year. With social networks looking to integrate search engines, as Facebook announced some weeks ago, their relevance will continue to grow in contrast to the more conventional search engines. This is one of the reasons why users' social network accounts are becoming more and more like a virtual fingerprint. With the growing relevance of social networks, the importance of a simple way for customers to get in touch with, say, customer care or contract departments will be crucial for sales processes in critical markets. Customers want to have one single point of contact and also an easy "login method" with no dedicated usernames, passwords or proprietary accounts. The golden rule in the future social-media-driven markets will be: the lower the complexity of the initial contact, the better a company can profit from social networks. If you, for example, can offer a smart way for an existing customer to use self-service portals, the cost of providing phone support can be lowered significantly.

    Recruiting and Hiring of "Digital Natives"

    Another particular example is "social" recruiting processes. The so-called "digital natives" don't want to type their profile facts and CVs into proprietary systems. Why not use the actual LinkedIn profile? In the German-speaking region, the market in the area of professional social networks is dominated by XING, the equivalent of LinkedIn. A few weeks back, this network also opened up its interfaces for integrating social sign-ons or the usage of profile data for recruiting purposes. In the European (and especially the German) employment market, where the number of young candidates is shrinking because of the low birth rate in the region, it will become essential to use social-media-supported hiring processes to find and on-board the rare talents. In fact, you will see traditional recruiting websites integrated with social hiring to attract the best talents in a market where the pool of potential candidates has decreased dramatically over the years.

    Identity Management as a key factor in the Customer Experience process

    To create the biggest value for customers and also future employees, companies need to connect their HCM or CRM systems with powerful identity management solutions. With the highly efficient Oracle (social and mobile enabling) Identity Management solution, enterprises can combine easy sign-on with secure connections to the backend infrastructure. This combination enables a "one-stop" service with personalized content for customers and talents. In addition, companies can collect valuable data for the enrichment of their CRM data. The goal is to enrich the so-called "Customer Experience" via all available customer channels and contact points. Those systems have already gained importance in B2C markets and will gradually spread out to B2B channels in the near future.

    Conclusion: Central and "Social" Identity Management is key to Customer Experience Management and Talent Management

    For a seamless delivery of "Customer Experience Management" and a modern way of recruiting the best talent, companies need to integrate social sign-on capabilities with modern CX and talent management infrastructure. This lowers the barrier for existing and future customers or employees to get in touch with sales, support or human resources. Identity management is the technology enabler and backbone for a modern Customer Experience infrastructure. Oracle Identity Management solutions provide the opportunity to secure social applications and connect them with modern CX solutions. In the end, companies benefit from "best of breed" processes and solutions for enriching customer experience without compromising security.

    About esentri: esentri is a provider of enterprise social networking and brings the benefits of social network communication into business environments. As one key strength, esentri uses Oracle Identity Management solutions for delivering social and mobile access for Oracle's CRM and HCM solutions.

    ...End Guest Post...

    With new and enhanced features optimized to secure the new digital experience, the recently announced Oracle Identity Management 11g Release 2 enables organizations to securely embrace cloud, mobile and social infrastructures and reach new user communities to help further expand and develop their businesses.

    Additional Resources:
    Oracle Identity Management 11gR2 release
    Oracle Identity Management website
    Datasheet: Mobile and Social Access (pdf)
    IDM at OOW: Focus on Identity Management
    Facebook: OracleIDM
    Twitter: OracleIDM

    We look forward to your feedback on this post and welcome your suggestions for topics to cover in Identity Globe Trotters. Last Friday, every month!

  • Building Enterprise Smartphone App – Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. In the previous post I discussed the reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications.

    Platforms

    Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage.

    History

    To my memory we owe all of this to a product called the Apple Newton, which came out in 1993. It was the first PDA, and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand window that allowed for quick stylus entry. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems distributed with devices from multiple hardware vendors.

    Current

    The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market with the concept of an online store that makes it easy for customers to get new features and functionality on demand. With Android, Google made this more than a one-horse race. Not only did they come to compete, their low cost actually made them the leading OS. Of course, what made Android so attractive is also its major fault: it is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google, though, most consumers aren't aware of the threat they are under. Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch-up and finally came out with the Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google does, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms. The result of all this is hundreds of millions of smartphones being sold monthly across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions.

    Features

    So what are the features that make these devices flexible enough to be considered for use in the enterprise? The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point. Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications, such as Bing Vision, that allow us to scan just about any printed code or text to find information. These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor and for optical character recognition (OCR). Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices. Local storage of information that can be called up even when the device cannot reach the network is the other big capability. This gives users the ability to work offline and transmit information when connections are restored. These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for companies that implement them.

    Coming Up

    In the third installment I will cover key concerns that you face when building enterprise smartphone apps.

    del.icio.us Tags: smartphones, enterprise smartphone apps, architecture, iOS, Android, Windows Phone

  • IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management?

    - by andrewbrust
    Photo by parl, 'Risk.' Under Creative Commons Attribution-NonCommercial-NoDerivs License

    This has been a busy week for Microsoft, and for me as well. On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX. That evening I flew from New York to Seattle. On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made. Readers of this blog know I'm a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity - comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago. On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.

    But there was a caveat that impacts the accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts. While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with the iPod touch in the portable music player market.

    If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not. HTML5 and the Web are strategic, so here comes IE9, and it's a very good browser. Try it and see. Silverlight is strategic too, as is SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production. Downloads of that product have exceeded Microsoft's projections by more than 50%, and the company is even citing analyst firms' figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that's a separate discussion.) Windows Phone is strategic too... I wasn't 100% positive of that before, but the Nokia agreement has made me confident. Xbox as an entertainment appliance is also strategic. Standalone music players are not strategic - and even if they were, selling them has been a losing battle for Microsoft. So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources. Essentially, it would be for the greater good.

    But it's not all good. In this scenario, Zune player customers would lose out. Unless they wanted to switch to Windows Phone, and then use their phone's battery for their portable media needs, they're going to need a new platform. They're going to feel abandoned. Even if Zune lives, there have been other such cul-de-sacs for customers. Remember SPOT watches? Live Spaces? The original Live Mesh? Microsoft discontinued each of these products. The company is to be commended for cutting its losses, as admitting a loss isn't easy. But Redmond won't be well regarded by the victims of those decisions. Instead, it gets black marks.

    What's the answer? I think it's a bit like the 1980s New York City "don't block the box" gridlock rules: don't enter an intersection unless you see a clear path through it. If the light turns red and you're blocking the perpendicular traffic, that's your fault in judgment. You get fined and get points on your license, and you don't get to shrug it off as beyond your control. Accountability is key. The same goes for Microsoft. If it decides to enter a market, it should see a reasonable path to success in that market.

    Switching analogies, Microsoft shouldn't make investments haphazardly, and it certainly shouldn't ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns. People won't continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments. The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches. It's true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too. When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility. That doesn't mean all risk is bad, but it does mean no product team's risk should be taken lightly.

    For mutual fund companies, it's the CEO's job to give his fund managers autonomy, but to make sure they're conforming to a standard of rational risk management. Because all those funds carry the same brand, and many of them serve the same investors. The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

  • What's New in SGD 5.1?

    - by Fat Bloke
    Oracle announced the latest version of Secure Global Desktop (SGD) this week, with 3 major themes: support for Android devices; support for desktop Chrome clients; and support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first.

    Oracle SGD - what, how and why?

    Oracle Secure Global Desktop is Oracle's secure remote access product, which allows users on almost any device to access almost any type of application hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications. This is actually a really smart environment for an increasing number of use cases where:

    Users need mobility of location AND device (i.e. work from anywhere);
    IT needs to ensure security of applications and data (of course!);
    The application requires an end-user environment which can't be guaranteed, and IT may not own the client platform (e.g. BYOD, working from home, partners or contractors).

    Oracle has a specific interest in this, of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say, IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using.

    What's new?

    So back to SGD 5.1 and what is new there:

    Android devices

    Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, so we're pleased to announce that 5.1 supports Android 4.x tablets such as the Nexus 7 and 10, and the Galaxy Tab. Here's how it works, with screenshots from my Nexus 7: simply point your browser to the SGD server URL and log in. The workspace is the list of apps that the admin has deemed OK for you to run; you click on an application to run it (here's Excel and Oracle E-Business Suite). There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard, such as Ctrl, the Windows key, etc.), and touch gestures can be mapped to desktop events (such as tap-and-hold to right-click). All in all, a pretty nice implementation for Android tablet users.

    Desktop Chrome browsers

    SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and JavaScript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was: surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool!

    Oracle Unified Directory

    As I mentioned above, a lot of Oracle users already benefit from SGD. And a lot of Oracle customers use Oracle Unified Directory as their enterprise and carrier-grade user directory. So it makes a lot of sense that SGD now supports this LDAP directory, both for authentication and as a means to determine which users get which applications, e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses.

    Summary

    With new devices and faster 4G networking becoming more prevalent, the pressure for businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or, if you're an existing customer, get it from My Oracle Support.

    -FB

  • Disable Windows 8 edge gestures/hot corners for multi-touch applications while running in full screen

    - by Bondye
    I have a full-screen AS3 game made with Adobe AIR that runs on Windows 7. In this game it may not be easy to exit (think about kiosk mode: only exit by pressing Esc and entering a password). Now I want this game to run on Windows 8. The game is working as expected, but the annoying things are the edge gestures/hot corners (left, top, right, bottom) and the shortcuts. I've read articles, but none helped me. People talk about registry edits, but I don't get this working + the user needs to restart his/her computer. I want to open my game, turn off gestures/hot corners, and when the game closes the gestures/hot corners need to become available again. I have seen some applications doing the same thing I want to accomplish. I found this, so I am able to detect the gestures. But how do I ignore their actions? I also read about ASUS Smart Gestures, but this is for the touch-pad. And I have tried Classic Shell, but I need to disable the edge gestures/hot corners without such programs, just on-the-fly. I also found this, but I don't know how to implement it:

        HRESULT SetTouchDisableProperty(HWND hwnd, BOOL fDisableTouch)
        {
            IPropertyStore* pPropStore;
            HRESULT hrReturnValue = SHGetPropertyStoreForWindow(hwnd, IID_PPV_ARGS(&pPropStore));
            if (SUCCEEDED(hrReturnValue))
            {
                PROPVARIANT var;
                var.vt = VT_BOOL;
                var.boolVal = fDisableTouch ? VARIANT_TRUE : VARIANT_FALSE;
                hrReturnValue = pPropStore->SetValue(PKEY_EdgeGesture_DisableTouchWhenFullscreen, var);
                pPropStore->Release();
            }
            return hrReturnValue;
        }

    Does anyone know how I can do this? Or point me in the right direction? I have tried some things in C# and C++, but I ain't a skilled C#/C++ developer. Also, the game is made in AS3, so it will be hard to implement this in C#/C++. I work on the Lenovo AIO (All-in-One) with Windows 8.
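    For what it's worth, the helper above only needs a window handle, so a minimal calling sketch might look like this (the includes, the window title, and the lookup via FindWindowW are assumptions for illustration; PKEY_EdgeGesture_DisableTouchWhenFullscreen comes from the Windows 8 SDK property keys):

        #include <windows.h>
        #include <shellapi.h>   // SHGetPropertyStoreForWindow
        #include <propsys.h>
        #include <propkey.h>    // PKEY_EdgeGesture_DisableTouchWhenFullscreen

        int main()
        {
            // "My AIR Game" is a placeholder title, not from the question.
            HWND hwnd = FindWindowW(NULL, L"My AIR Game");
            if (hwnd != NULL)
            {
                SetTouchDisableProperty(hwnd, TRUE);   // gestures off while running
                // ... wait for the game to finish ...
                SetTouchDisableProperty(hwnd, FALSE);  // restore on the way out
            }
            return 0;
        }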

  • Form submit to target iframe only works once

    - by Pointy
    I'm having a really irritating problem doing something that I'm sure I've done before. The setup is this:

    There's a form which is submitted via a "click" handler on a button, though the submit is ultimately a simple call to the form's native submit() function.
    The form's target is an iframe, which is initially filled with an empty page (that is, its source is a URL for an empty page).
    That is, the "target" attribute of the form is the same identifier (and yes, it's a valid identifier) as the "name" and "id" attributes of the iframe (and yes, it's unique).

    Now the goal of this setup is as follows: the form contains file upload fields. Should something be wrong with one of the files uploaded, the server will report back an error. If the error response (that is, the html page with a new copy of the form, along with appropriate error messages) were allowed to reload the original window, then the file input fields would be cleared. That's not good. Thus the form submits to the iframe, so that the response from the server can be a small page that knows it's in that hidden iframe, and knows to move the error messages up to the form. One more thing: the form itself is also in an iframe, as part of a popup modal dialog (like a jQuery UI dialog, only slightly different; same idea though).

    The setup works fine - on the first submit. In other words, if I supply nice happy files to upload, the server ships back the successful response, and the dialog is closed correctly. If I send a bogus file, the server responds with an error page that correctly copies its stuff up to the form page. On the second submit, however (like, if I fix the bogus file input field), the browser insists on sending the server response to a new browser tab. As far as I can tell, the form's "target" remains correct, the iframe "name" and "id" attributes aren't changed, and I even make sure to update the hidden iframe's "window.name" and "window.id" values. None of that is helping; I always get a new browser tab.

    I'm trying to set up a slightly simpler test case to see if I can rule out some of my framework code (the stuff that does the submit), though via a few console.log() calls I think that stuff is OK; it's certainly OK in all the other dialogs etc. on the site. When/if I create a simpler version I'll post it. In the meantime, if any of you insanely smart people recognize this situation and know of a trick to make it work, I'd be really thankful. I see the same behavior in both Firefox (3.6) and Chrome, so it's got to be my problem and not a browser quirk (well, at least that's what I think to be true).
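    For reference, a minimal sketch of the setup described above (the ids, names, and action URL are invented for illustration, not from the question):

        <!-- Hidden sink iframe; its name matches the form's target. -->
        <iframe name="upload-sink" id="upload-sink" src="blank.html"
                style="display:none"></iframe>

        <form id="upload-form" action="/upload" method="post"
              enctype="multipart/form-data" target="upload-sink">
            <input type="file" name="attachment">
            <button type="button" id="upload-button">Upload</button>
        </form>

        <script type="text/javascript">
        // The click handler ultimately just calls the form's native submit().
        document.getElementById("upload-button").onclick = function () {
            document.getElementById("upload-form").submit();
        };
        </script>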

  • What IDE to use for Python

    - by husayt
    As a Python newbie, it is interesting to know what IDEs ("GUIs/editors") others use for Python coding. If you can just give the name (e.g. Textpad, Eclipse...) that will be enough. If it is already mentioned, you can just vote for it. But if you can also give some more comparative information, that will be much appreciated. Thanks.

    Update: Results so far

    PyDev with Eclipse (CP, F, AC, PD, EM, SI, MLS, UML, SC, UT, LN, CF, BM)
    Komodo (CP, C/F, MLS, PD, AC, SC, SI, BM, LN, CF, CT)
    Emacs (CP, F, AC, MLS, PD, EM, SC, SI, BM, LN, CF, CT, UT, UML)
    Vim (CP, F, AC, MLS, SI, BM, LN, CF)
    TextMate (Mac, CT, CF, MLS, SI, BM, LN)
    Gedit (Linux, F, AC, MLS, BM, LN, CT [sort of])
    Idle (CP, F, AC)
    PIDA (Linux, CP, F, AC, MLS, SI, BM, LN, CF) (Vim-based)
    NotePad++ (Windows)
    BlueFish (Linux)
    JEdit (CP, F, BM, LN, CF, MLS)
    E-Texteditor (TextMate clone for Windows)
    WingIde (CP, C, AC, MLS (support for C), PD, EM, SC, SI, BM, LN, CF, CT, UT)
    Eric Ide (CP, F, AC, PD, EM, SI, LN, CF, UT)
    Pyscripter (Windows, F, AC, PD, EM, SI, LN, CT, UT)
    ConTEXT (Windows, C)
    SPE (F, AC, UML)
    SciTE (CP, F, MLS, EM, BM, LN, CF, CT, SH)
    Zeus (W, C, BM, LN, CF, SI, SC, CT)
    NetBeans (CP, F, PD, UML, AC, MLS, SC, SI, BM, LN, CF, CT, UT, RAD)
    DABO (CP)
    BlackAdder (C, CP, CF, SI)
    PythonWin (W, F, AC, PD, SI, BM, CF)
    Geany (CP, F, very limited AC, MLS, SI, BM, LN, CF)
    UliPad (CP, F, AC, PD, MLS, SI, LI, CT, UT, BM)
    Boa Constructor (CP, F, AC, PD, EM, SI, BM, LN, UML, CF, CT)
    ScriptDev (W, C, AC, MLS, PD, EM, SI, BM, LN, CF, CT)
    Spider (CP, F, AC)
    Editra (CP, F, AC, MLS, SC, SI, BM, LN, CF)
    Pfaide (Windows, C, AC, MLS, SI, BM, LN, CF, CT)
    KDevelop (CP, F, MLS, SC, SI, BM, LN, CF)

    Acronyms used:

    CP - Cross Platform
    C - Commercial
    F - Free
    AC - Automatic Code-completion
    MLS - Multi-Language Support
    PD - Integrated Python Debugging
    EM - Error Markup
    SC - Source Control integration
    SI - Smart Indent
    BM - Bracket Matching
    LN - Line Numbering
    UML - UML editing / viewing
    CF - Code Folding
    CT - Code Templates
    UT - Unit Testing
    UID - GUI Designer (e.g. Qt, Eric, ...)
    DB - Integrated database support
    RAD - Rapid app development support

    I don't mention basics like syntax highlighting, as I expect these by default. This is a dry list reflecting your feedback and comments; I am not advocating any of these tools. I will keep updating this list as you keep posting your answers.

    PS. Can you help me add the features of the above editors to the list (like autocomplete, debugging, etc.)?

  • XML to be validated against multiple xsd schemas

    - by Michael Rusch
    I'm writing the xsd and the code to validate, so I have great control here. I would like to have an upload facility that adds stuff to my application based on an xml file. One part of the xml file should be validated against different schemas based on one of the values in the other part of it. Here's an example to illustrate:

        <foo>
          <name>Harold</name>
          <bar>Alpha</bar>
          <baz>Mercury</baz>
          <!-- ... more general info that applies to all foos ... -->
          <bar-config>
            <!-- the content here is specific to the bar named "Alpha" -->
          </bar-config>
          <baz-config>
            <!-- the content here is specific to the baz named "Mercury" -->
          </baz-config>
        </foo>

    In this case, there is some controlled vocabulary for the content of <bar>, and I can handle that part just fine. Then, based on the bar value, the appropriate xml schema should be used to validate the content of bar-config. Similarly for baz and baz-config. The code doing the parsing/validation is written in Java. Not sure how language-dependent the solution will be.

    Ideally, the solution would permit the xml author to declare the appropriate schema locations and what-not so that s/he could get the xml validated on the fly in a sufficiently smart editor. Also, the possible values for <bar> and <baz> are orthogonal, so I don't want to do this by extension for every possible bar/baz combo. What I mean is, if there are 24 possible bar values/schemas and 8 possible baz values/schemas, I want to be able to write 1 + 24 + 8 = 33 total schemas, instead of 1 * 24 * 8 = 192 total schemas. Also, I'd prefer NOT to break out the bar-config and baz-config into separate xml files if possible. I realize that might make all the problems much easier, as each xml file would have a single schema, but I'm trying to see if there is a good single-xml-file solution.
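    One way to sketch this in Java, under the assumption that each bar value maps to a schema file named after it (the naming convention and the element lookups are invented for illustration): parse the document once, read <bar>, then validate only the <bar-config> subtree against the matching schema.

        import javax.xml.XMLConstants;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.validation.SchemaFactory;
        import java.io.File;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class ConfigValidator {
            public static void validateBarConfig(File xmlFile) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document doc = dbf.newDocumentBuilder().parse(xmlFile);

                // Assumed layout: exactly one <bar> and one <bar-config> element.
                String bar = doc.getElementsByTagName("bar").item(0).getTextContent();
                Element barConfig = (Element) doc.getElementsByTagName("bar-config").item(0);

                // Invented convention: the schema for bar "Alpha" lives in bar-Alpha.xsd.
                File xsd = new File("bar-" + bar + ".xsd");
                SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
                sf.newSchema(xsd).newValidator().validate(new DOMSource(barConfig));
            }
        }

    The same pattern repeats for baz/baz-config, which keeps the schema count additive (1 + 24 + 8) rather than multiplicative.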

  • HTML Purifier: Removing an element conditionally based on its attributes

    - by pinkgothic
    As per the HTML Purifier smoketest, 'malformed' URIs are occasionally discarded to leave behind an attribute-less anchor tag, e.g.

        <a href="javascript:document.location='http://www.google.com/'">XSS</a>

    becomes

        <a>XSS</a>

    ...as well as occasionally being stripped down to the protocol, e.g.

        <a href="http://1113982867/">XSS</a>

    becomes

        <a href="http:/">XSS</a>

    While that's unproblematic per se, it's a bit ugly. Instead of trying to strip these out with regular expressions, I was hoping to use HTML Purifier's own library capabilities / injectors / plug-ins / whathaveyou.

    Point of reference: Handling attributes

    Conditionally removing an attribute in HTML Purifier is easy. Here the library offers the class HTMLPurifier_AttrTransform with the method confiscateAttr(). While I don't personally use the functionality of confiscateAttr(), I do use an HTMLPurifier_AttrTransform, as per this thread, to add target="_blank" to all anchors.

        // more configuration stuff up here
        $htmlDef = $htmlPurifierConfiguration->getHTMLDefinition(true);
        $anchor = $htmlDef->addBlankElement('a');
        $anchor->attr_transform_post[] = new HTMLPurifier_AttrTransform_Target();
        // purify down here

    HTMLPurifier_AttrTransform_Target is a very simple class, of course.

        class HTMLPurifier_AttrTransform_Target extends HTMLPurifier_AttrTransform
        {
            public function transform($attr, $config, $context)
            {
                // I could call $this->confiscateAttr() here to throw away an
                // undesired attribute
                $attr['target'] = '_blank';
                return $attr;
            }
        }

    That part works like a charm, naturally.

    Handling elements

    Perhaps I'm not squinting hard enough at HTMLPurifier_TagTransform, or am looking in the wrong place(s), or generally amn't understanding it, but I can't seem to figure out a way to conditionally remove elements. Say, something to the effect of:

        // more configuration stuff up here
        $htmlDef = $htmlPurifierConfiguration->getHTMLDefinition(true);
        $anchor = $htmlDef->addElementHandler('a');
        $anchor->elem_transform_post[] = new HTMLPurifier_ElementTransform_Cull();
        // add target as per 'point of reference' here
        // purify down here

    With the Cull class extending something that has a confiscateElement() ability, or comparable, wherein I could check for a missing href attribute or a href attribute with the content http:/.

    HTMLPurifier_Filter

    I understand I could create a filter, but the examples (Youtube.php and ExtractStyleBlocks.php) suggest I'd be using regular expressions in that, which I'd really rather avoid if at all possible. I'm hoping for an onboard or quasi-onboard solution that makes use of HTML Purifier's excellent parsing capabilities. Returning null in a child class of HTMLPurifier_AttrTransform unfortunately doesn't cut it. Anyone have any smart ideas, or am I stuck with regexes? :)

  • Tree Node Checked behavior on a TreeView in Compact Framework 3.5 running on Windows Mobile 6.5

    - by Hydroslide
    I have been upgrading an existing .NET Windows Mobile application to use the 3.5 version of the Compact Framework and to run on Windows Mobile 6.5. I have a form with a TreeView. The TreeView.CheckBoxes property is set to true so that each node has a check box. This gives no trouble in all previous versions of Windows Mobile. However, in version 6.5, when you click on a check box it appears to check and then uncheck instantaneously, but it only raises the AfterCheck event once. The only way I can get a check to stick is by double-clicking it (which is the wrong behavior). Has anyone seen this behavior? Does anyone know of a workaround for it? I have included a simple test form. Dump this form into a Visual Studio 2008 Smart Device application targeted at Windows Mobile 6 to see what I mean.

        Public Class frmTree
            Inherits System.Windows.Forms.Form

        #Region " Windows Form Designer generated code "

            Public Sub New()
                MyBase.New()
                ' This call is required by the Windows Form Designer.
                InitializeComponent()
                ' Add any initialization after the InitializeComponent() call.
            End Sub

            'Form overrides dispose to clean up the component list.
            <System.Diagnostics.DebuggerNonUserCode()> _
            Protected Overrides Sub Dispose(ByVal disposing As Boolean)
                If disposing AndAlso components IsNot Nothing Then
                    components.Dispose()
                End If
                MyBase.Dispose(disposing)
            End Sub

            'Required by the Windows Form Designer
            Private components As System.ComponentModel.IContainer
            Friend WithEvents TreeView1 As System.Windows.Forms.TreeView
            Private mainMenu1 As System.Windows.Forms.MainMenu

            'NOTE: The following procedure is required by the Windows Form Designer
            'It can be modified using the Windows Form Designer.
            'Do not modify it using the code editor.
            <System.Diagnostics.DebuggerStepThrough()> _
            Private Sub InitializeComponent()
                Dim TreeNode1 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node0")
                Dim TreeNode2 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node2")
                Dim TreeNode3 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node3")
                Dim TreeNode4 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node4")
                Dim TreeNode5 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node1")
                Dim TreeNode6 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node5")
                Dim TreeNode7 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node6")
                Dim TreeNode8 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node7")
                Me.mainMenu1 = New System.Windows.Forms.MainMenu
                Me.TreeView1 = New System.Windows.Forms.TreeView
                Me.SuspendLayout()
                '
                'TreeView1
                '
                Me.TreeView1.CheckBoxes = True
                Me.TreeView1.Location = New System.Drawing.Point(37, 41)
                Me.TreeView1.Name = "TreeView1"
                TreeNode2.Text = "Node2"
                TreeNode3.Text = "Node3"
                TreeNode4.Text = "Node4"
                TreeNode1.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode2, TreeNode3, TreeNode4})
                TreeNode1.Text = "Node0"
                TreeNode6.Text = "Node5"
                TreeNode7.Text = "Node6"
                TreeNode8.Text = "Node7"
                TreeNode5.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode6, TreeNode7, TreeNode8})
                TreeNode5.Text = "Node1"
                Me.TreeView1.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode1, TreeNode5})
                Me.TreeView1.Size = New System.Drawing.Size(171, 179)
                Me.TreeView1.TabIndex = 0
                '
                'frmTree
                '
                Me.AutoScaleDimensions = New System.Drawing.SizeF(96.0!, 96.0!)
                Me.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi
                Me.AutoScroll = True
                Me.ClientSize = New System.Drawing.Size(240, 268)
                Me.Controls.Add(Me.TreeView1)
                Me.Menu = Me.mainMenu1
                Me.Name = "frmTree"
                Me.Text = "frmTree"
                Me.ResumeLayout(False)
            End Sub

        #End Region

        End Class

  • Creating a variable list Pashua, OS X & Bash.

    - by S1syphus
    First of all, for those that don't know, Pashua is a tool for creating native Aqua dialog windows. An example of what a window config looks like is:

        # pashua_run()
        # Define what the dialog should be like
        # Take a look at Pashua's Readme file for more info on the syntax
        conf="
        # Set transparency: 0 is transparent, 1 is opaque
        *.transparency=0.95

        # Set window title
        *.title = Introducing Pashua

        # Introductory text
        tb.type = text
        tb.default = \"HELLO WORLD\"
        tb.height = 276
        tb.width = 310
        tb.x = 340
        tb.y = 44
        "

        if [ -e "$icon" ]
        then
            # Display Pashua's icon
            conf="$conf
            img.type = image
            img.x = 530
            img.y = 255
            img.path = $icon"
        fi

        if [ -e "$bgimg" ]
        then
            # Display background image
            conf="$conf
            bg.type = image
            bg.x = 30
            bg.y = 2
            bg.path = $bgimg"
        fi

        pashua_run "$conf"

        echo " tb = $tb"

    The problem is, Pashua can't really get output from stdout, but it can take arguments. Following on from what Dennis Williamson posted here, what it ideally should do is generate an output file based on information from a text file, to be executed in pashua_run, or to add the pashua_run around the window argument:

        count=1
        while read -r i
        do
            echo "AB${count}.type = openbrowser"
            echo "AB${count}.label = Choose a master playlist file"
            echo "AB${count}.width=310"
            echo "AB${count}.tooltip = Blabla filesystem browser"
            echo "some text with a line from the file: $i"
            (( count++ ))
        done < TEST.txt >> long.txt

    So the output is:

        AB1.type = openbrowser
        AB1.label = Choose a master playlist file
        AB1.width=310
        AB1.tooltip = Blabla filesystem browser
        some text with a line from the file: foo
        AB2.type = openbrowser
        AB2.label = Choose a master playlist file
        AB2.width=310
        AB2.tooltip = Blabla filesystem browser
        some text with a line from the file: bar
        AB3.type = openbrowser
        AB3.label = Choose a master playlist file
        AB3.width=310
        AB3.tooltip = Blabla filesystem browser
        some text with a line from the file: dev
        AB4.type = openbrowser
        AB4.label = Choose a master playlist file
        AB4.width=310
        AB4.tooltip = Blabla filesystem random

    So if there is a clever way to get the output of that and place it into pashua_run, that would be cool - i.e. load the contents of TEST.txt, generate the config, and place it into pashua_run on the fly. I've tried using cat and opening the file... but because it's in pashua_run it doesn't work. Is there a smart way to break out then back in? Or the second way, which I was thinking of: get the output, then append it into the middle of a text file containing the pashua runtime, then execute it. Maybe slightly hacky, but I would imagine it will do the job. Any ideas?

    ++ I know I could probably make my life a lot easier by doing this in ActionScript and Cocoa, although at present I don't have time for such a learning curve, though I do plan to get round to it.
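    One hedged sketch of the "break out then back in" idea: build the generated rows with command substitution and splice them into the conf string before calling pashua_run (TEST.txt and the AB element names follow the question; everything else is illustrative):

        # Minimal sketch: generate one openbrowser row per line of TEST.txt
        # and append the result to the existing $conf string.
        generated=$(
            count=1
            while read -r i
            do
                echo "AB${count}.type = openbrowser"
                echo "AB${count}.label = Choose a master playlist file ($i)"
                echo "AB${count}.width = 310"
                (( count++ ))
            done < TEST.txt
        )
        conf="$conf
        $generated"
        pashua_run "$conf"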

  • PHP-based LaTeX parser -- where to begin?

    - by Alex Basson
    The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML. Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong - if you know of something that would serve this purpose, by all means let me know, and thank you in advance.

    But assuming it doesn't exist, I guess I have to go write it myself. Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL" type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another. I'm just dumb enough to consider doing it with regex - after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge-cases as, say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy.

    So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first)? I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it; necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer.

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service, and I sort of have a dilemma as far as using asynchronous vs synchronous vs threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I can lazily load each one with a synchronous request, but that'll block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs asynchronous for that reason alone. The only time it makes sense is to have a worker thread make your synchronous request, and notify the main thread when the request is done. This will prevent the block.

    The question then is benchmarking your code and seeing which has more overhead: a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is you need to set up either a smart notification or delegate system, as you can have multiple requests for multiple resources happening at any given time. The other problem with them is that if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go:

        - (NSArray *)users {
            if (users == nil)
                users = do_async_request; // NO GOOD
            return users;
        }

    whereas the following will:

        - (NSArray *)users {
            if (users == nil)
                users = do_sync_request; // OK.
            return users;
        }

    You also might have priority. What I mean by priority is that if you look at Apple's Mail application on the iPhone, you'll notice it first sucks down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message.

    I suppose my question to you experts is this: When are you using asynchronous, synchronous, threads - and when are you using either async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all-too-common problem. It's simple to hack something out. The problem is, I don't want to hack, and I want to have something that's simple and easy to maintain.
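    A minimal sketch of the "synchronous request on a worker thread" option described above, using long-standing Foundation calls (the method names and the do_sync_request placeholder are illustrative, not from the question):

        // Kick off the fetch without blocking the UI (call from the main thread).
        [self performSelectorInBackground:@selector(fetchUsers) withObject:nil];

        // Runs on a worker thread, so the blocking call is acceptable here.
        - (void)fetchUsers {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSArray *result = do_sync_request; // placeholder from the question
            [self performSelectorOnMainThread:@selector(usersDidLoad:)
                                   withObject:result
                                waitUntilDone:NO];
            [pool release];
        }

        // Back on the main thread: safe to publish state and update the UI.
        - (void)usersDidLoad:(NSArray *)result {
            users = [result retain];
        }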

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have legacy code to maintain, and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in JavaScript, with extensive usage of jQuery + different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting the result of a remote AJAX request. It also uses callbacks a lot, and a pretty complicated "by convention" programming style (lots of event handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered in the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all, etc.

    PROBLEM: How to approach this code base, so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools. Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page's logic, this particular flow of function calls was executed in a particular order). Not to mention being able to run a coverage analysis, identifying potentially orphaned code fragments.

    I would like to stress once more that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...)

    An IDE of some sort that would aid in extending that code would also be great, but I am currently looking into the possibility of using Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say 70% JavaScript with jQuery, 30% ASP). I have obviously tried Firebug, but I was unable to find a way to define a breakpoint or step into code which is "injected" into the client-side JS using AJAX calls (i.e. the application retrieves the code by invoking a URL and injects it into the client's local code). The Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.

    Read the article

  • Simple database design and LINQ

    - by Anders Svensson
    I have very little experience designing databases, and now I want to create a very simple database that does the same thing I previously had in XML. Here's the XML:

        <services>
          <service type="writing">
            <small>125</small>
            <medium>100</medium>
            <large>60</large>
            <xlarge>30</xlarge>
          </service>
          <service type="analysis">
            <small>56</small>
            <medium>104</medium>
            <large>200</large>
            <xlarge>250</xlarge>
          </service>
        </services>

    Now, I wanted to create the same thing in a SQL database, and started doing this (hope this formats OK, but you'll get the gist: four columns and two rows):

        ServiceType  Small  Medium  Large
        Writing      125    100     60
        Analysis     56     104     200

    This didn't work too well, since I then wanted to use LINQ to select, say, the Large value for Writing (60). But I couldn't use LINQ for this (as far as I know) with a variable for the size (see the parameters in the method below). I could only do that if I had a column like "Size" whose values would be Small, Medium, and Large. But that doesn't feel right either, because then I would get several rows with ServiceType = Writing (3 in this case, one for each size), and the same for Analysis. And if I were to add more service types I would have to do the same. Simply repetitive... Is there any smart way to do this using relationships or something?

    Using the second design above (although it's not good), I could use the following LINQ to select a value with parameters sent to the method:

        protected int GetHourRateDB(string serviceType, Size size)
        {
            CalculatorLinqDataContext context = new CalculatorLinqDataContext();
            var data = (from calculatorData in context.CalculatorDatas
                        where calculatorData.Service == serviceType
                              && calculatorData.Size == size.ToString()
                        select calculatorData).Single();
            return data.Hours;
        }

    But if there is another, better design, could you please also describe how to do the same selection using LINQ with that design? Please keep in mind that I am a rookie at database design, so please be as explicit and pedagogical as possible :-) Thanks! Anders
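    To show what I mean by "relationships", here is a rough sketch of the kind of normalized design I imagine might be suggested, with one row per size (all table and property names here are invented, so treat it as a guess rather than a working schema):

        // Hypothetical normalized tables:
        //   Service:     Id | Name                  (Writing, Analysis, ...)
        //   ServiceRate: ServiceId | Size | Hours   (one row per size)
        protected int GetHourRate(string serviceName, Size size)
        {
            using (var context = new CalculatorLinqDataContext())
            {
                return (from rate in context.ServiceRates
                        join service in context.Services
                            on rate.ServiceId equals service.Id
                        where service.Name == serviceName
                              && rate.Size == size.ToString()
                        select rate.Hours).Single();
            }
        }

    Adding a new service type would then just mean inserting rows instead of adding columns -- if that is in fact the right direction.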

    Read the article

  • How hard is programming? Really. [closed]

    - by Bubba88
    Hi! This question is about your perception of the activity of programming. How hard and exacting a task is it? There is a lot of buzz about programming nowadays; people say that programmers are smart, very technical and abstract at the same time, know a lot about the world, psychology, etc. They say programmers have really powerful brains, because there is so much to keep in consideration simultaneously, with so much information nested associatively (up to 10 levels of nesting, they say). Still, we each have to define the terms for ourselves. So here is the question: What do you think about programming in general? Is it hard? Is it 'for everyone' or only for a particular kind of person? How much non-CS background do you need to program (just to program, really; enterprise applications, for example)? How long is the learning curve (again, for programming in general)?

    And another bunch of random questions:

    - If you did not like/love programming, would that seriously trouble your current employment?
    - If you were to start from the beginning, would you choose this direction again?
    - What other areas (jobs or maybe hobbies) are comparable to programming in the way they can explode someone's lovely brain?
    - Is 'non-Turing-complete programming' (SQL, XML, etc.) comparable to what we do, or is it really way easier, less demanding, cheap, and akin to cooking :)?

    Well, the essence is: how would you describe the difficulty of programming? Or, on the other hand: did you ever catch yourself thinking at some point: OMG, it's sooo hard! I don't know how I would ever program, even carried away this way and doing programming just for fun? It's very interesting to know your opinion; you're the programmers, after all. I mean, many people must be exaggerating/speculating about a thing they do not really know about. But that mustn't be the case here on SO :) P.S.: I'll try my best to update this post later, and please edit it too. At least I'll get decent English in my question text :)

    Read the article

  • Store a long string in a multidimensional array

    - by QLiu
    Hello all, I have a long array of strings which looks like this:

        var callinfo_data = new Array(
            "1300 135 604#<b>Monday - Friday: 9:00 a.m. to 5:30 p.m. AEST</b>", // Australia
            ..
            "0844000040#<b>lunedì-venerdì ore 10:00 - 17:00 CET</b>", // Switzerland (it)
            "212 356 9707#<b>Hafta içi her gün: 10:00 - 18:00</b>", // Turkey
            "08451610009#<b>Monday - Friday: 9:00 a.m. to 6:30 p.m. GMT</b>", // UK
            "866 486 6866#<b>Monday - Friday: 7:00 a.m. to 11:00 p.m. EST</b><br />Saturday: 9:00 a.m. to 8:00 p.m. EST", // USA
            "+31208501004#<b>Monday - Friday: 9:00 a.m. to 7:30 p.m. GMT+1</b>", // other countries
            " # ");

    As you can see, each entry contains a phone number and opening hours. I can use split to separate an entry into two sections:

        info = callinfo_data[n].split("#");

    and then I can present them in HTML like this:

        "<div id='phoneNumber'>" + info[0] + "</div><div id='openTime'>" + info[1] + "</div>"

    But my display-phone-number function reads cookie variables and then selects the right contact info to display. Like this:

        phone = callinfo_data[2].split("#");
        if (locale == 'UK')
            details = phone[0] + build_dropdown(locale);
        else if (locale == 'fr')
            details = 'French Contact Details<br>' + build_dropdown(locale);
        else if (locale == 'be')
            details = 'Belgian Contact Details<br>' + build_dropdown(locale);
        else
            details = 'Unknown Contact Detail';
        writeContactInfo(details);

    My first question is how I can build a function that loads the phone number and time based on my cookie variable (e.g. UK) in a smart way. I can hard-code everything, but I think that is too silly; I would have to write a long run of lines like:

        phone1 = callinfo_data[0].split("#");
        phone2 = callinfo_data[1].split("#");
        ... etc.

    Second question: how can I load this long array into an easy-to-access multidimensional array? Thank you. Regards, Qing

    Read the article

  • Multiline editable textarea in SVG

    - by Timo
    I'm trying to implement a multiline editable text field in SVG. I have the following code at http://jsfiddle.net/ca4d3/ :

        <svg width="1000" height="1000" overflow="scroll">
          <g transform="rotate(5)">
            <rect width="300" height="400" fill="#22DD22" fill-opacity="0.5"/>
          </g>
          <foreignObject x="10" y="10" overflow="visible" width="10000" height="10000"
              requiredFeatures="http://www.w3.org/TR/SVG11/feature#Extensibility">
            <p style="display:table-cell;padding:10px;border:1px solid red;
                background-color:white;opacity:0.5;font-family:Verdana;
                font-size:20px;white-space: pre; word-wrap: normal; overflow: visible;
                overflow-y: visible; overflow-x:visible;"
                contentEditable="true" xmlns="http://www.w3.org/1999/xhtml">
              Write here some text. Be smart and select some word.
              If you wanna be really COOL, paste here something cool!
            </p>
          </foreignObject>
        </svg>

    In the newest Chrome, Safari and Firefox the code works in some fashion, but in Opera and IE 9 it does not. The goals are:

    0) Works in the newest Chrome, Safari, Firefox, Opera and IE, and if at all possible on some tablets.
    1) White space is preserved and text wraps only on newline characters (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *).
    2) The text field is editable (in the same reliable and stable way as textareas and contenteditable p elements in HTML), and its height and width expand to fit the text (works in Chrome, Safari and Firefox, but not in Opera and IE 9 *).
    3) The text field can be transformed (rotated, skewed, translated) while maintaining text editability (tested rotation, which does not work in any browser *). EDIT: foreignObject rotation works in Firefox 15.0.1, but not in Safari 5.1.7 (6534.57.2), Chrome 22.0.1229.79, Opera 12.02, or IE 9. Tested on Mac OS X 10.6.8.
    4) The text field can be clipped and masked, while not necessarily maintaining text editability (not yet tested).

    *) using the code above

    All of this can be achieved using Flash, but Flash has problems severe enough that it is not suitable for my purposes (after every little change in the code, everything has to be compiled again using Flex, which is slow; font size has limits; the tracking technique is pixel-oriented, not relative to em size; etc.), and there are still differences across platforms. So I want to give SVG a try!

    QUESTION: Can I achieve goals 0-4 with the current SVG support in browsers? Will the upcoming SVG 2.0 help in this case?

    EDIT: Changed display:table to display:table-cell (and added a new jsfiddle), because display:table made the field lose focus when arrow-up was pressed on the first text row.

    Read the article

  • How do you use technology to memorize a set of terms?

    - by user49767
    There are always a few sets of items that need to be memorized in a short span of time. Here are my cases:

    1) My job requires some set of items to be memorized.
    2) I am a developer who has to learn 150+ tags within the next 3 days.
    3) A FIX developer/support person has to remember a minimum of 125+ tags (and their sets of possible values).
    4) It is better if the team's SQL developer knows all the tables and columns in my database.
    5) When people join a new department or job, memorizing a few related items definitely gives some benefit.

    In most cases, I suggest people understand the domain better, and there's nothing wrong with using Google (but remember the correct search word). But recently I came across a junior developer who put a lot of effort into memorizing a set of things (150+ table structures, FIX protocol tags, almost 300+ configuration items from a property file) and was very, very successful in his job and swift in responding to support queries. Needless to say, he is a smart worker too (not a dumb guy). When I try to recall some of the successful employees I have met, they were very good at remembering entire schemas, and they did it in a short span of time. I don't argue that memorizing alone brings success, but it greatly helps when the situation demands it.

    Here is my question: I am not good at remembering things, but that shouldn't be a lame excuse, so I am evaluating ways to use technology to memorize better. I am not very interested in memory techniques (mnemonics, photographic memory, etc.). I have recorded 100+ items and listen to them whenever I find free time, and there have definitely been some fruitful results. Now I need your suggestions on ways to exploit technology to memorize. There could be many reasons why people remember a subject (they are passionate, it is essential, they are the author, creator, or responsible party); I am not interested in dissecting why people remember, but rather in ways and techniques (cheat sheets...) to remember a set of items.

    Note: I appreciate and encourage anyone who could rephrase my question better. Note: I have kept a couple of cheat sheets close to my monitor; honestly, they did not help me :).

    Read the article

  • How and why do I set up a C# build machine?

    - by mmr
    Hi all, I'm working with a small (4 person) development team on a C# project. I've proposed setting up a build machine which will do nightly builds and tests of the project, because I understand that this is a Good Thing. Trouble is, we don't have a whole lot of budget here, so I have to justify the expense to the powers that be. So I want to know:

    - What kind of tools/licenses will I need? Right now, we use Visual Studio and Smart Assembly to build, and Perforce for source control. Will I need something else, or is there an equivalent of a cron job for running automated scripts?
    - What, exactly, will this get me, other than an indication of a broken build? Should I set up test projects in this solution (sln file) that will be run by these scripts, so I can have particular functions tested? We have, at the moment, two such tests, because we haven't had the time (or, frankly, the experience) to write good unit tests.
    - What kind of hardware will I need for this?
    - Once a build has been finished and tested, is it common practice to put that build up on an FTP site or have some other way for internal access? The idea is that this machine makes the build and we all go to it, but we can still make debug builds if we have to.
    - How often should we make this kind of build?
    - How is space managed? If we make nightly builds, should we keep around all the old builds, or start to ditch them after about a week or so?
    - Is there anything else I'm not seeing here?

    I realize that this is a very large topic and I'm just starting out. I couldn't find a duplicate of this question here, and if there's a book out there I should just get, please let me know.

    EDIT: I finally got it to work! Hudson is completely fantastic, and FxCop is showing that some features we thought were implemented were actually incomplete. We also had to change the installer type from the Old-And-Busted vdproj to the New Hotness WiX. Basically, for those who are paying attention: if you can run your build from the command line, then you can put it into Hudson. Making the build run from the command line via MSBuild is a useful exercise in itself, because it forces your tools to be current.
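    For anyone else starting from zero like we did, here is roughly the level of test the nightly build can run (NUnit syntax; the class under test is a made-up placeholder, not our real code):

        using NUnit.Framework;

        // Minimal sketch of an automated test the build machine can run.
        // PriceCalculator is an invented example class for illustration.
        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Total_SumsAllLineItems()
            {
                var calculator = new PriceCalculator();
                calculator.AddLineItem(10m);
                calculator.AddLineItem(15m);
                Assert.AreEqual(25m, calculator.Total);
            }
        }

    Hudson then just invokes the test runner from the command line after the MSBuild step and fails the build if any assertion fails.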

    Read the article

  • How to design this relation in a DB schema

    - by raticulin
    I have a table Car in my DB; one of the columns is purchaseDate. I want to be able to tag every car with a number of policies (limited to 10 policies). Each policy has a time to live (TTL, a duration like '5 years', '10 months', etc.), that is, for how long after the car's purchaseDate the policy can be applied. I need to perform the following actions:

    - When inserting a Car, it will be set with a number of policies (at least one is set).
    - Sometimes a Car will be updated to add/remove a policy.
    - Searches must be done taking date/policies into account, for example: 'select all cars that are not covered by any policy as of today'.

    My current design is (pol0..pol9 are the policies):

        CREATE TABLE Car (
            id int NOT NULL IDENTITY(1,1),
            purchaseDate datetime NOT NULL,
            //more stuff...
            pol0 smallint default NULL,
            pol1 smallint default NULL,
            pol2 smallint default NULL,
            pol3 smallint default NULL,
            pol4 smallint default NULL,
            pol5 smallint default NULL,
            pol6 smallint default NULL,
            pol7 smallint default NULL,
            pol8 smallint default NULL,
            pol9 smallint default NULL,
            PRIMARY KEY (id)
        )

        CREATE TABLE Policy (
            id smallint NOT NULL,
            name varchar(50) collate Latin1_General_BIN NOT NULL,
            ttl varchar(100) collate Latin1_General_BIN NOT NULL,
            PRIMARY KEY (id)
        )

    The problem I am facing is that the SQL to perform the query above is a nightmare to write. Since I don't know which column a given policy may be in, I have to check all columns for every policy, and so on. So I am wondering whether it is worth changing this. My questions are:

    - The smallint Policy id was chosen instead of an 'int IDENTITY' to save some space, as there are going to be millions of Car records. It just adds complexity when creating a Policy, as we must handle the id ourselves. Was it worth doing this?
    - I am thinking that maybe there is a much better design. Obviously we could move the policy/car relation to its own table, CarPolicy. The benefits would be: no limit of 10 policies per car, and adding/removing policies would be much easier. When only the default policy applies (when no others are applied, one called the Default policy is applied), we could signal that by having no entry in CarPolicy; right now this is done by inserting the Default policy id into one of the columns. The cons are that we would need to change the DB, the ORM classes, etc. What would you recommend? Maybe there is another smart way to implement this that we are not aware of, without using the CarPolicy table?
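    For the record, here is a rough sketch of how I imagine the 'not covered as of today' query could look through the ORM if we went with the CarPolicy table (the entity and navigation property names are invented, and the TTL strings are simplified to a TtlDays number):

        // Hypothetical ORM entities for the junction-table design:
        //   Car(Id, PurchaseDate), CarPolicy(CarId, PolicyId), Policy(Id, TtlDays).
        // A car counts as covered if purchaseDate + TTL has not yet passed.
        var today = DateTime.Today;
        var uncoveredCars =
            from car in context.Cars
            where !car.CarPolicies.Any(cp =>
                car.PurchaseDate.AddDays(cp.Policy.TtlDays) >= today)
            select car;

    (This ignores the implicit Default policy for cars with no CarPolicy rows, which we would still have to special-case.)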

    Read the article

  • How do you unit test the real world?

    - by Kim Sun-wu
    I'm primarily a C++ coder, and thus far I have managed without really writing tests for all of my code. I've decided this is a Bad Idea(tm), after adding new features that subtly broke old features, or, depending on how you wish to look at it, introduced some new "features" of their own. But unit testing seems to be an extremely brittle mechanism. You can test for something in "perfect" conditions, but you don't get to see how your code performs when stuff breaks. Take, for instance, a crawler; let's say it crawls a few specific sites for data X. Do you simply save sample pages, test against those, and hope that the sites never change? That would work fine as a regression test, but what sort of tests would you write to constantly check those sites live and let you know when the application isn't doing its job because a site changed something that now causes your application to crash? Wouldn't you want your test suite to monitor the intent of the code?

    The above example is a bit contrived, and something I haven't run into (in case you haven't guessed). Let me pick something I have, though. How do you test that an application will do its job in the face of a degraded network stack? That is, say you have a moderate amount of packet loss, for one reason or another, and you have a function DoSomethingOverTheNetwork() which is supposed to degrade gracefully when the stack isn't performing as it should; but does it? The developer tests it personally by purposely setting up a gateway that drops packets to simulate a bad network when he first writes it. A few months later, someone checks in code that modifies something subtly, so the degradation isn't detected in time, or the application doesn't even recognize the degradation. This is never caught, because you can't run real-world tests like this using unit tests, can you?

    Further, how about file corruption? Let's say you're storing a list of servers in a file, and the checksum looks okay, but the data isn't really valid. You want the code to handle that, and you write some code that you think does so. How do you test that it does exactly that for the life of the application? Can you?

    Hence, brittleness. Unit tests seem to test the code only under perfect conditions (and this is promoted, with mock objects and such), not what it will face in the wild. Don't get me wrong: I think unit tests are great, but a test suite composed only of them seems to be a smart way to introduce subtle bugs into your code while feeling overconfident about its reliability. How do I address the above situations? If unit tests aren't the answer, what is? Thanks!
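    In case it clarifies the network example, here is the kind of seam I keep hearing about, sketched in C# for brevity (the interface and class names are invented; in C++ the same shape falls out of an abstract base class):

        using System;

        // Hypothetical seam: DoSomethingOverTheNetwork() would depend on
        // this interface instead of hitting the real socket API directly.
        public interface ITransport
        {
            byte[] Send(byte[] payload);
        }

        // Test double that simulates a lossy link by failing a fixed fraction
        // of sends, so graceful-degradation paths can be exercised in a test.
        public class LossyTransport : ITransport
        {
            private readonly ITransport inner;
            private readonly double lossRate;
            private readonly Random random = new Random(42); // fixed seed => repeatable

            public LossyTransport(ITransport inner, double lossRate)
            {
                this.inner = inner;
                this.lossRate = lossRate;
            }

            public byte[] Send(byte[] payload)
            {
                if (random.NextDouble() < lossRate)
                    throw new TimeoutException("simulated packet loss");
                return inner.Send(payload);
            }
        }

    But even with that seam, my question stands: the simulated loss only covers the failures I thought to simulate.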

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >