Search Results

Search found 322 results on 13 pages for 'headache'.


  • How To Get Web Site Thumbnail Image In ASP.NET

    - by SAMIR BHOGAYTA
    Overview
    One very common requirement of many web applications is to display a thumbnail image of a web site. A typical example is to provide a link to a dynamic website displaying its current thumbnail image, or displaying images of websites with their links as a result of a search (I would love to see it on Google). Microsoft .NET Framework 2.0 makes this quite easy to do in an ASP.NET application.

    Background
    In order to generate an image of a web page, first we need to load the web page to get its HTML code, and then this HTML needs to be rendered in a web browser. After that, a screen shot can be taken easily. I think there is no easier way to do this. Before .NET Framework 2.0 it was quite difficult to use a web browser in C# or VB.NET because we either had to use COM+ interoperability or third-party controls, which become a headache later.

    WebBrowser control in .NET Framework 2.0
    In .NET Framework 2.0 we have a new Windows Forms WebBrowser control, which is a wrapper around the old shdocvw.dll. All you really need to do is drop a WebBrowser control from your Toolbox onto your form. If you have not used the WebBrowser control yet, it's quite easy to use and very consistent with other Windows Forms controls. Some important methods of the WebBrowser control are:

        public bool GoBack();
        public bool GoForward();
        public void GoHome();
        public void GoSearch();
        public void Navigate(Uri url);
        public void DrawToBitmap(Bitmap bitmap, Rectangle targetBounds);

    These methods are self-explanatory from their names; the Navigate method, for example, redirects the browser to the provided URL and has a number of useful overloads. DrawToBitmap (inherited from Control) draws the current image of the WebBrowser into the provided bitmap.

    The Solution
    Let's implement the solution discussed above. First we will define a static method to get the web site thumbnail image.

        public static Bitmap GetWebSiteThumbnail(string Url, int BrowserWidth, int BrowserHeight, int ThumbnailWidth, int ThumbnailHeight)
        {
            WebsiteThumbnailImage thumbnailGenerator = new WebsiteThumbnailImage(Url, BrowserWidth, BrowserHeight, ThumbnailWidth, ThumbnailHeight);
            return thumbnailGenerator.GenerateWebSiteThumbnailImage();
        }

    The WebsiteThumbnailImage class will have a public method named GenerateWebSiteThumbnailImage, which will generate the website thumbnail image in a separate STA thread and wait for the thread to exit. In this case, I decided to use the Join method of the Thread class to block the calling thread until the bitmap is actually available, and then return the generated web site thumbnail.

        public Bitmap GenerateWebSiteThumbnailImage()
        {
            Thread m_thread = new Thread(new ThreadStart(_GenerateWebSiteThumbnailImage));
            m_thread.SetApartmentState(ApartmentState.STA);
            m_thread.Start();
            m_thread.Join();
            return m_Bitmap;
        }

    The _GenerateWebSiteThumbnailImage method will create a WebBrowser control object and navigate to the provided URL. We also register for the DocumentCompleted event of the WebBrowser control to take a screen shot of the web page. To pass the flow to the other controls we need to call Application.DoEvents() and wait in a loop for the completion of the navigation, until the browser state changes to Complete.

        private void _GenerateWebSiteThumbnailImage()
        {
            WebBrowser m_WebBrowser = new WebBrowser();
            m_WebBrowser.ScrollBarsEnabled = false;
            m_WebBrowser.Navigate(m_Url);
            m_WebBrowser.DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(WebBrowser_DocumentCompleted);
            while (m_WebBrowser.ReadyState != WebBrowserReadyState.Complete)
                Application.DoEvents();
            m_WebBrowser.Dispose();
        }

    The DocumentCompleted event will be fired when the navigation is completed and the browser is ready for a screen shot. We take the screen shot using the DrawToBitmap method as described previously, which returns the bitmap of the web browser. Then the thumbnail image is generated using the GetThumbnailImage method of the Bitmap class, passing it the required thumbnail image width and height.

        private void WebBrowser_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            WebBrowser m_WebBrowser = (WebBrowser)sender;
            m_WebBrowser.ClientSize = new Size(this.m_BrowserWidth, this.m_BrowserHeight);
            m_WebBrowser.ScrollBarsEnabled = false;
            m_Bitmap = new Bitmap(m_WebBrowser.Bounds.Width, m_WebBrowser.Bounds.Height);
            m_WebBrowser.BringToFront();
            m_WebBrowser.DrawToBitmap(m_Bitmap, m_WebBrowser.Bounds);
            m_Bitmap = (Bitmap)m_Bitmap.GetThumbnailImage(m_ThumbnailWidth, m_ThumbnailHeight, null, IntPtr.Zero);
        }

    One more example here: http://www.codeproject.com/KB/aspnet/Website_URL_Screenshot.aspx
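    A quick usage sketch (not from the original article; the URL, sizes, and save path are assumptions) showing how the static helper described above might be called and the result saved as a PNG:

        using System.Drawing;
        using System.Drawing.Imaging;

        // e.g. inside a button handler or page event:
        Bitmap thumbnail = WebsiteThumbnailImage.GetWebSiteThumbnail(
            "http://www.example.com", 1024, 768, 200, 150);
        thumbnail.Save(@"C:\thumbs\example.png", ImageFormat.Png);
        thumbnail.Dispose();  // bitmaps hold GDI handles, so release them promptly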

    Read the article

  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest post by Brenna Johnson, Oracle Commerce Product. A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there's a hidden issue in this story that most of the American people don't understand: delivering a great commerce customer experience (CX) is hard. It shouldn't be, but it is. The reality of the government's issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well. If there's one thing the botched launch of the site has taught us, it's that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don't go as planned. It may even give you a moment of solace – we have the same issues! But why?

    Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it's the reality of running a site with legacy technology constraints in today's demanding, customer-centric market. This approach also means there are a lot of cooks in a lot of different kitchens. You've got development and IT, the business and the marketing team, an external systems integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd-party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn't. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it has forced organizations to think from the outside in. Consumers – whether they're shopping for shoes or a new healthcare plan – don't care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn't sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer's mind is making excuses for why they can't buy a product – just get it to work. So what is the government doing? My guess is working day and night to get the site performing – and having to throw big money at the problem. In the meantime they're sending frustrated online users to the call center, or even to a location where a trained "navigator" can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).

    One thing we've learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there's too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you're in B2B or B2C, it's guaranteed that your competitors are making CX a priority, and those early to the game have already begun to outpace their competition. So as you're planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer's journey, and think from the outside in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It's time to give the people what they want!

    Read the article

  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine, but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However, now I cannot write to the array in the structure. Here's a pseudo-code example:

        main() {
            char pattern[] = "\x32\x33\x12\x13\xba\xbb";
            PrintData(pattern);
            pattern[2] = '\x65';
            PrintData(pattern);
        }

    That one works, but this one does not:

        struct ENTRY {
            char* pattern;
            int somenum;
        };

        main() {
            ENTRY Entries[] = {
                {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44},
                {"\x12\x34\x56\x78", 555}
            };
            PrintData(Entries[0].pattern);
            Entries[0].pattern[2] = '\x65';  // 0xC0000005 exception!!! :(
            PrintData(Entries[0].pattern);
        }

    The second version causes an access violation exception on the assignment. I'm sure it's because the second version allocates memory differently, but I'm starting to get a headache trying to figure out what's what or how to fix this. (I'm currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in place of the pattern array, and then trying to remember to free the—temporary—buffer.) (Specifically, the original version cast the pattern array—plus an offset—to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that since the length of the source is unknown—it may not be four bytes—so it uses memcpy instead. I've checked and re-checked and have made sure that the pointers passed to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars, and ignoring the null terminator. Using an assignment as above causes the same problem.) Any ideas? Thanks a lot.
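    A minimal C sketch (not from the original post) of the allocation difference the poster suspects: an array initialized from a string literal gets a writable copy of the bytes, while a char* member points directly at the literal itself, which typically lives in read-only memory. Giving the struct a fixed-size array member is one common fix; the sizes and values here are illustrative.

        #include <stdio.h>

        struct ENTRY {
            char pattern[16];   /* array member: the struct owns a writable copy */
            int somenum;
        };

        int main(void) {
            char copy[] = "\x32\x33\x12\x13";       /* literal copied into a writable array */
            const char *ro = "\x32\x33\x12\x13";    /* pointer straight at the literal */

            copy[2] = '\x65';                       /* fine */
            /* ro[2] = '\x65'; */                   /* undefined behavior: this is the 0xC0000005 */

            struct ENTRY e = { "\x32\x33\x12\x13", 44 };
            e.pattern[2] = '\x65';                  /* fine: writes into the struct's own copy */
            printf("%02x\n", (unsigned char)e.pattern[2]);
            return 0;
        }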

    Read the article

  • IE6 and fieldset background color?

    - by codemonkey613
    Hey, I'm having some difficulty with CSS and IE6 compatibility. URL: http://bit.ly/dlX7cS

    Problem #1: I put a background image on the fieldset around Canada and United States. In IE6 and IE7, the background bleeds above the border-top of the fieldset. So I found a fix. It is applied only to IE browsers, and moves the legend up a few pixels, aligning the background correctly.

        <!-- Fix: IE6/IE7, Legends -->
        <!--[if lte IE 7]>
        <style type="text/css">
            fieldset { position: relative; }
            fieldset legend { position: absolute; top: -0.5em; left: 0; }
        </style>
        <![endif]-->

    This fixes IE7. But in IE6, it seems to make my legend for Canada vanish completely. Does anyone have a copy of IE6 in which they can open my site and tell me whether the Canada label is visible? (I am testing with a multi-IE program, and it keeps crashing. My copy might not be accurate.) If it's not there, any suggestions on how to fix it? Also, any suggestion on where I can download a working copy of IE6?

    Problem #2: I have a Google Map embedded using an iframe. The width of that iframe is 515px. In Firefox, Chrome, and IE7, that is the correct alignment. But in IE6, it wraps underneath the Just Energy paragraph beside it. It doesn't fit; I have to change the width to 513px for it to fit. Um, anyone know where those 2px of difference come from? I removed the border, padding, and margin from the iframe, but still something is happening.

        <!-- Google Maps -->
        <iframe class="gmap" src="http://maps.google.com/maps/ms?hl=en&amp;ie=UTF8&amp;msa=0&amp;msid=100146512697135839835.000481e2a2779e8865863&amp;ll=42,-100&amp;spn=20,80&amp;output=embed" frameborder="0" marginheight="0" marginwidth="0" scrolling="no"></iframe>
        <!-- / Google Maps -->

    Er, big headache. lol

    Read the article

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility. There is less code, and it is easier to make significant changes to your UI using XAML. Since the choice of WPF was obvious, I figured that I might as well go all the way by using MVVM as my application architecture, since it offers blendability, separation of concerns, and unit testability. Theoretically, it seems beautiful, like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As expected in practice, I'm finding that I've traded one problem for another. I tend to be an obsessive programmer in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my test on productivity and has turned into a big yucky hack!

    The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions. Stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an about box working fine until I got an exception the second time I invoked it, saying that it couldn't show the dialog box again once it was closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation of it, and finally another in the IDialogViewModel. I thought MVVM would save us from such extravagant hackery! There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they are just alert boxes that don't require custom interfaces or view models.

    I'm planning on giving up on the MVVM pattern, or at least its orthodox implementation. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?
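    A minimal sketch (names are illustrative and not from any particular MVVM toolkit) of the dialog-service approach most of the competing solutions boil down to: the view model asks an injected service to show a dialog, and the service creates a fresh Window per invocation, which also sidesteps the "cannot reshow a closed dialog" exception described above. It assumes a DataTemplate maps the dialog view model to its view.

        using System.Windows;

        public interface IDialogService
        {
            bool? ShowDialog(object dialogViewModel);
        }

        public class WindowDialogService : IDialogService
        {
            public bool? ShowDialog(object dialogViewModel)
            {
                // A new Window per call: WPF windows cannot be reshown once closed.
                var window = new Window
                {
                    Content = dialogViewModel,   // resolved to a view via a DataTemplate
                    Owner = Application.Current.MainWindow,
                    SizeToContent = SizeToContent.WidthAndHeight,
                    WindowStartupLocation = WindowStartupLocation.CenterOwner
                };
                return window.ShowDialog();
            }
        }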

    Read the article

  • Good Email Notification Sending Service

    - by Philibert Perusse
    I need to send a few but important email notifications to individual users. For instance, when they register their software I send them a confirmation email. Right now, I am using 'sendmail' from my Perl CGI script to do the job. Most of my automated emails are lost or marked as junk. Unfortunately, I am using shared hosting services and don't have very good control over the SPF and Sender ID DNS records. Even worse, some other user of that shared server has been infected with some kind of SPAM-BOT and the IP is now blacklisted until further notice! Anyway, I just don't want to deal with this kind of headache. I am looking for an online service that I can subscribe to and pay something like $0.10 per email I send, with no monthly fees. I just need an API to be able to send the email from the PHP or Perl code I will have to write. I have been looking around at all those "email sending services" and they are all wrapped around creating campaigns and managing lists for bulk email marketing distribution and newsletters. But remember, I want to send an email notification to a "single" recipient. So far, I have looked at MailChimp, SocketLabs, iContact, ConstantContact, StreamSend and so many others, to no avail. I have seen one comment on Hacker News saying that MailChimp has an API for transactional e-mails (i.e. ad-hoc ones, to welcome a user for example), so you're not just restricted to using them for bulk emails. But I cannot find this in the API documentation supplied; maybe it was removed. Any suggestions out there? Here is a summary of my requirements:

    - Allows ad hoc sending of email to a single recipient. Throughput may well be throttled, I don't care; I am sending like 2-5 emails a day.
    - API available in PHP or Perl to connect to that web service.
    - Ideally I can send HTML-formatted emails; otherwise I will live with text only.
    - Solution not too expensive; between $0.01 and $0.25 per email would be acceptable. No recurring monthly fees.

    Read the article

  • JSON Feed Appears to be XHR when it should be JS

    - by Oscar Godson
    I don't get why it's doing this with the 2nd feed (it appears as an XHR call rather than just JS, looking at it in Firefox/Firebug). The 2nd feed has the exact same MIME type as Flickr's JSON feed, yet the PortlandOregon.gov one shows as XHR, and I get a null callback when using $.getJSON; if I use $.ajax with a 'json' or 'jsonp' type I get nothing at all. If I use the Flickr one I get the normal "[object Object]" callback. What's going on? Please help! This has been such a headache for about a week. And I have authorization to change the feed, but I have to request the change, so if anyone knows for absolute sure, let me know that!

    Response headers from Flickr's API (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?) [JS]:

        Date Mon, 15 Mar 2010 21:56:06 GMT
        P3P policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
        Expires Mon, 26 Jul 1997 05:00:00 GMT
        Last-Modified Mon, 15 Mar 2010 21:52:17 GMT
        Cache-Control no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma no-cache
        Vary Accept-Encoding
        Content-Encoding gzip
        Content-Length 3647
        Connection close
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host api.flickr.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Cookie BX=4lflj455amesp&b=3&s=iv; fltoto=0%2C0%2C0%2C0%2C1%2C0%3B0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%2C0%3B1%3B0%3B; search_z=t; localization=en-us%3Bus%3Bus

    PortlandOregon.gov (http://www.portlandonline.com/shared/cfm/json.cfm?c=27321) [XHR]:

    Response headers:

        Connection close
        Date Mon, 15 Mar 2010 21:57:49 GMT
        Server Microsoft-IIS/6.0
        Set-Cookie CONTACT_ID=0;path=/ LAST_USER=;path=/ BIGipServercgis_pol_web_pool-http=1191537418.20480.0000; path=/
        Content-Type application/x-javascript; charset=utf-8

    Request headers:

        Host www.portlandonline.com
        User-Agent Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept application/json, text/javascript, */*
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip,deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://oscargodson.com/dev/addWidget/test.html
        Origin http://oscargodson.com
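    A short sketch (not from the original post) of why the two feeds record differently in Firebug: jQuery only uses script-tag (JSONP) transport when the URL contains a callback=? style parameter and the server wraps its JSON in that callback. Flickr's feed does both, so it shows as JS; the PortlandOnline URL has no callback parameter, so jQuery issues a plain XHR, which the browser's same-origin policy blocks for another domain.

        // Works: Flickr honors jsoncallback=?, so this becomes a <script> include (JS).
        $.getJSON('http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&format=json&jsoncallback=?',
            function (data) { console.log(data); });

        // Fails: no callback support server-side, so jQuery falls back to a normal
        // XHR, and the cross-domain response comes back empty/null.
        $.getJSON('http://www.portlandonline.com/shared/cfm/json.cfm?c=27321',
            function (data) { console.log(data); });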

    Read the article

  • <plugin> not a function

    - by bah
    Hi, I've been trying to solve this mystery for almost 2 hours, and it is giving me a headache. I have tried 2 plug-ins already and I'm always getting "* is not a function". My code is exactly like the examples, so I don't know why it's not working.

        <!DOCTYPE html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>asd</title>
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript" src="serial/jquery.scrollTo"></script>
            <script type="text/javascript" src="serial/jquery.serialScroll"></script>
            <script type="text/javascript">
                $(document).ready(function(){
                    $('#slider').serialScroll({
                        items: 'li',
                        offset: -230,  // when scrolling to a photo, stop 230 before reaching it (from the left)
                        start: 1,      // as we are centering it, start at the 2nd
                        duration: 1200,
                        force: true,
                        stop: true,
                        lock: false,
                        cycle: false,  // don't pull back once you reach the end
                        easing: 'easeOutQuart',  // use this easing equation for a funny effect
                        jump: true     // click on the images to scroll to them
                    });
                });
            </script>
        </head>
        <body>
            <div id="slider">
                <ul>
                    <li><img width="500" height="500" src="dummy/dummy.jpg" alt="Css Template Preview" /></li>
                    <li><img width="500" height="500" src="dummy/dummy1.jpg" alt="Css Template Preview" /></li>
                    <li><img width="500" height="500" src="dummy/dummy2.jpg" alt="Css Template Preview" /></li>
                    <li><img width="500" height="500" src="dummy/dummy3.jpg" alt="Css Template Preview" /></li>
                </ul>
            </div>
        </body>
        </html>

    I must be missing something essential there, because I can't see what's wrong. I'm using jQuery 1.4.2, and the plug-ins I've tried are Easy Slider and jQuery serialScroll.
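    A hedged debugging sketch (not from the original post): "... is not a function" on a plugin call almost always means the plugin file never loaded, and the script tags above point at 'serial/jquery.scrollTo' and 'serial/jquery.serialScroll' with no .js extension, which is worth checking first. This snippet verifies what actually got attached to jQuery:

        $(document).ready(function () {
            // Both should print "function" if the plugin files really loaded.
            console.log(typeof $.fn.scrollTo);
            console.log(typeof $.fn.serialScroll);
            // Note: the 'easeOutQuart' option also requires the separate jQuery
            // easing plugin; without it, drop the easing option or include it.
        });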

    Read the article

  • XML Return from an Oracle Stored Procedure

    - by Tequila Jinx
    Unfortunately most of my DB experience has been with MSSQL, which tends to hold your hand a lot more than Oracle. What I'm trying to do is fairly trivial in T-SQL; however, PL/SQL is giving me a headache. I have the following procedure:

        CREATE OR REPLACE PROCEDURE USPX_GetUserbyID
            (USERID USERS.USERID%TYPE, USERRECORD OUT XMLTYPE)
        AS
        BEGIN
            SELECT XMLELEMENT("user"
                , XMLATTRIBUTES(u.USERID AS "userid", u.companyid AS "companyid", u.usertype AS "usertype", u.status AS "status", u.personid AS "personid")
                , XMLFOREST(p.FIRSTNAME AS "firstname", p.LASTNAME AS "lastname", p.EMAIL AS "email", p.PHONE AS "phone", p.PHONEEXTENSION AS "extension")
                , XMLELEMENT("roles",
                    (SELECT XMLAGG(XMLELEMENT("role", r.ROLETYPE))
                       FROM USER_ROLES r
                      WHERE r.USERID = USERID AND r.ISACTIVE = 1))
                , XMLELEMENT("watches",
                    (SELECT XMLAGG(XMLELEMENT("watch", XMLATTRIBUTES(w.WATCHID AS "id", w.TICKETID AS "ticket")))
                       FROM USER_WATCHES w
                      WHERE w.USERID = USERID AND w.ISACTIVE = 1))
            ) AS "RESULT"
            INTO USERRECORD
            FROM USERS u
            LEFT JOIN PEOPLE p ON p.PERSONID = u.PERSONID
            WHERE u.USERID = USERID;
        END USPX_GetUserbyID;

    When executed, it should return an XML document with the following structure:

        <user userid="" companyid="" usertype="" status="" personid="">
            <firstname />
            <lastname />
            <email />
            <phone />
            <extension />
            <roles>
                <role />
            </roles>
            <watches>
                <watch id="" ticket="" />
            </watches>
        </user>

    When I execute the query itself, replacing the USERID parameter with a string and removing the "into" clause, the query runs fine and returns the expected structure. However, when the procedure attempts to execute the query, passing the results of the XMLELEMENT function into the USERRECORD output parameter, I get the following exception:

        Error report:
        ORA-01422: exact fetch returns more than requested number of rows
        ORA-06512: at "USPX_GETUSERBYID", line 4
        ORA-06512: at line 3
        01422. 00000 - "exact fetch returns more than requested number of rows"
        *Cause:    The number specified in exact fetch is less than the rows returned.
        *Action:   Rewrite the query or change number of rows requested

    I'm baffled trying to nail this down, and unfortunately my google-fu hasn't helped. I've found plenty of Oracle SQL|XML examples, but none that deal with XML returns from a procedure. Note: I know that an alternate method of retrieving XML using DBMS methods exists; however, it's my understanding that that functionality is deprecated in favor of SQL|XML.
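    A hedged diagnosis (not from the original post, but a classic PL/SQL gotcha that matches this exact error): inside a SQL statement, an unqualified name resolves to a column before a parameter, so WHERE u.USERID = USERID compares the column with itself and is true for every row, hence the multi-row fetch into a single variable. Renaming the parameter removes the capture; a minimal sketch:

        CREATE OR REPLACE PROCEDURE USPX_GetUserbyID
            (p_userid USERS.USERID%TYPE, USERRECORD OUT XMLTYPE)
        AS
        BEGIN
            -- same XMLELEMENT construction as above, abbreviated here
            SELECT XMLELEMENT("user", XMLATTRIBUTES(u.USERID AS "userid"))
              INTO USERRECORD
              FROM USERS u
             WHERE u.USERID = p_userid;  -- unambiguous: parameter vs. column
        END USPX_GetUserbyID;

    The same renaming applies to the correlated subqueries (r.USERID = p_userid, w.USERID = p_userid), which are currently matching every active row for the same reason.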

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in a customer's environment. There are two types of environments: Windows and Linux/Unix. The application is user based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning, the customer installs my app but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one that would be (a) the most flexible, covering the most common modern environments, and (b) would take the least effort while staying robust and standard.

    Option (1) - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would then add his users to my application and it would check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option (2) - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I will have to walk an unknown directory structure, and who knows if this will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is drowning in this voodoo.

    Option (3) - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer #3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
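    A minimal sketch of option (3) under stated assumptions: the JAAS configuration name "AppLogin" and the realm/KDC values are illustrative, and the login module is the JDK's built-in Krb5LoginModule. The configuration lives in a jaas.conf the customer points at their KDC:

        // jaas.conf:
        //   AppLogin { com.sun.security.auth.module.Krb5LoginModule required; };
        // JVM flags:
        //   -Djava.security.auth.login.config=jaas.conf
        //   -Djava.security.krb5.realm=EXAMPLE.COM
        //   -Djava.security.krb5.kdc=kdc.example.com

        import java.io.IOException;
        import javax.security.auth.callback.*;
        import javax.security.auth.login.LoginContext;
        import javax.security.auth.login.LoginException;

        public class KerberosCheck {
            public static boolean authenticate(final String user, final char[] pass) {
                try {
                    LoginContext lc = new LoginContext("AppLogin", new CallbackHandler() {
                        public void handle(Callback[] callbacks)
                                throws IOException, UnsupportedCallbackException {
                            for (Callback cb : callbacks) {
                                if (cb instanceof NameCallback) {
                                    ((NameCallback) cb).setName(user);
                                } else if (cb instanceof PasswordCallback) {
                                    ((PasswordCallback) cb).setPassword(pass);
                                }
                            }
                        }
                    });
                    lc.login();   // throws if the KDC rejects the credentials
                    lc.logout();
                    return true;
                } catch (LoginException e) {
                    return false;
                }
            }
        }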

    Read the article

  • Drupal 6: Printing Unadulterated Primary Links and all children...

    - by dcolumbus
    How in the WORLD is this possible? I swear, I've read the equivalent of 3 encyclopedias to no avail. I've tried solutions within regions, page.tpl.php, and blocks. None of them give me what I need... and I know there are so many other people that need this too! I've come to the conclusion that I want to print out the menu within my page.tpl.php... so no block solutions, please. I want to be able to loop through the primary menu links (AND children) and rewrite the output so that there's no default Drupal class tagging. The closest I've found is this example:

        <?php if (is_array($primary_links)) : ?>
        <ul id="sliding-navigation">
            <?php foreach ($primary_links as $link): ?>
            <li class="sliding-element"><?php
                $href = $link['href'] == "<front>" ? base_path() : base_path() . drupal_get_path_alias($link['href']);
                print "<a href='" . $href . "'>" . $link['title'] . "</a>";
            ?></li>
            <?php endforeach; ?>
        </ul>
        <?php endif; ?>

    As you can see, the links are being reprinted with a custom UL and LI class... that's GREAT! However, no children are being printed. How would I extend this code so that all children are part of the list? NOTE: I don't want the children to only appear on their parent's page; they must be present all the time. Otherwise, the drop-down menu I have planned is useless. I sincerely thank you in advance for lessening my gargantuan headache!
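    A hedged sketch of one way to get the children (menu_tree_all_data() is part of Drupal 6's menu API; the function name and class names below are illustrative): unlike $primary_links, which is a flat array of top-level links, menu_tree_all_data() returns the whole tree with a 'below' key per item regardless of the active page, so it can be walked recursively and printed with custom markup:

        <?php
        function mytheme_print_menu_tree($tree, $ul_attrs = "id='sliding-navigation'") {
          print "<ul " . $ul_attrs . ">";
          foreach ($tree as $item) {
            $link = $item['link'];
            if ($link['hidden']) { continue; }
            $href = $link['href'] == '<front>' ? base_path() : url($link['href']);
            print "<li class='sliding-element'><a href='" . $href . "'>"
                . check_plain($link['title']) . "</a>";
            if (!empty($item['below'])) {
              // Children are always in the tree, not just on the parent's page.
              mytheme_print_menu_tree($item['below'], "class='sub-menu'");
            }
            print "</li>";
          }
          print "</ul>";
        }
        mytheme_print_menu_tree(menu_tree_all_data('primary-links'));
        ?>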

    Read the article

  • DCVS + hosting for a startup commercial multiplatform phone app

    - by AG
    I'm in lean startup mode, working on a simple phone app that will be published initially as an iThingy app and an Android app, with, possibly, Blackberry and Symbian versions to follow. I'm about to go from no repository to needing a central repository that up to 4 very part-time resources will be sharing. Two of us have no version control background, one has used Subversion, and I've used most of the major centralized VCS systems. I'm not going to be pushing the technical limitations of any VCS for a long time; I'm sure that any of the major systems would work fine. And the hosting accounts I've looked at seem reasonable. So I'm really focused on minimizing the downside risks. That is, I'd like to find a stable setup that is easy to learn in general, easy to use from Windows/Eclipse, and won't paint me into any obvious corners for the next 12 months or so. A quick search of the web has led me to consider the following pairs of DVCS and hosting service, with what I think I'm hearing as their strengths and weaknesses (for my purposes):

    - Bazaar/Launchpad -- My initial choice, since I need to get more familiar with this pair for the Google Summer of Code mentoring I'm doing. But, whatever the technical merits, a non-starter for me because they are purely open source: no private-repository plans to purchase that I can see.
    - Git/GitHub -- Git: fast, light, ultimately flexible, but relatively less Windows-friendly; an Eclipse plugin (EGit) is available but relatively young. GitHub: widely used, pricing is fine.
    - Mercurial/Bitbucket -- Mercurial: a little less flexible, a little more Windows-friendly, and its Eclipse plugin seems a bit more mature. Bitbucket: widely used, pricing is fine, and it includes a wiki and an issue tracker that we might be able to use instead of something like Basecamp, at least for a while.

    Mercurial/Bitbucket seem like the winning pair so far for my particular situation; at least two of us are definitely going to be working mostly from Eclipse on Windows, and reducing my own learning curve is a priority. ;-) But I have two specific questions: 1) Am I wrong about Bazaar/Launchpad, and is there a viable, secure way to use them for proprietary code? 2) Any reason to think that the Mercurial/Bitbucket pair will end up being a headache for my Mac developer soon, or for Blackberry or Symbian developers a little later? ag

    Read the article

  • Help with javascript form validation

    - by zac
    I am getting a headache with form validation and hoping that the kind folks here can give me a hand finishing this sucker up. I have it basically working, except the email validation is very simplistic (it only alerts if the field is blank, but does not actually check whether it is a valid email address) and I am relying on ugly alerts, but would like to have it reveal a hidden error div instead of the alert. I have this all wrapped up with an age validation check too. Here are the important bits, minus the cookie scripts:

        function checkAge() {
            valid = true;
            if (document.emailForm.email.value == 0) {
                alert("Please enter your email.");
                valid = false;
            }
            if (document.emailForm.year.selectedIndex == 0) {
                alert("Please select your Age.");
                valid = false;
            }
            var min_age = 13;
            var year = parseInt(document.forms["emailForm"]["year"].value);
            var month = parseInt(document.forms["emailForm"]["month"].value) - 1;
            var day = parseInt(document.forms["emailForm"]["day"].value);
            var theirDate = new Date((year + min_age), month, day);
            var today = new Date;
            if ((today.getTime() - theirDate.getTime()) < 0) {
                var el = document.getElementById('emailBox');
                if (el) {
                    el.className += el.className ? ' youngOne' : 'youngOne';
                }
                document.getElementById('emailBox').innerHTML = "<img src=\"emailSorry.gif\">"
                createCookie('age', 'not13', 0)
                return false;
            } else {
                // this part doesn't work either
                document.getElementById('emailBox').innerHTML = "<img src=\"Success.gif\">"
                createCookie('age', 'over13', 0)
                return valid;
            };
        };

        var x = readCookie('age');
        window.onload = function() {
            if (x == 'null') {
            };
            if (x == 'over13') {
            };
            if (x == 'not13') {
                document.getElementById('emailBox').innerHTML = "<img src=\"emailSorry.gif\">";
            };
        }

    Can someone please help me figure out a better email validation for this bit:

        if (document.emailForm.email.value == 0) {
            alert("Please enter your email.");
            valid = false;
        }

    And how would I replace the alert with something that changes a class from hidden to visible? Something like?

        document.getElementById('emailError').style.display = 'block'
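    A minimal sketch of both asks (the regex is a deliberately loose something@something.tld check, and 'emailError' is the hidden div's assumed id): validate with a pattern test, then toggle the error div instead of alerting.

        function isValidEmail(value) {
            // loose on purpose: one @, no spaces, and a dot in the domain part
            return /^[^\s@]+@[^\s@]+\.[^\s@]{2,}$/.test(value);
        }

        var emailError = document.getElementById('emailError');
        if (!isValidEmail(document.emailForm.email.value)) {
            emailError.style.display = 'block';  // reveal the hidden error div
            valid = false;
        } else {
            emailError.style.display = 'none';   // hide it again on success
        }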

    Read the article

  • TableDnD POST issue - help please

    - by netrise
    Hi, please help. I've got a terrible headache and my script is very simple. Why can't I get $_POST['table-2'] after submitting the update button? I want to get the ID numbers in their sorted order.

        # index.php
        <head>
            <script src="jquery.js" type="text/javascript"></script>
            <script src="jquery.tablednd.js" type="text/javascript"></script>
            <script src="jqueryTableDnDArticle.js" type="text/javascript"></script>
        </head>
        <body>
            <form method='POST' action=index.php>
                <table id="table-2" cellspacing="0" cellpadding="2">
                    <tr id="a"><td>1</td><td>One</td><td><input type="text" name="one" value="one"/></td></tr>
                    <tr id="b"><td>2</td><td>Two</td><td><input type="text" name="two" value="two"/></td></tr>
                    <tr id="c"><td>3</td><td>Three</td><td><input type="text" name="three" value="three"/></td></tr>
                    <tr id="d"><td>4</td><td>Four</td><td><input type="text" name="four" value="four"/></td></tr>
                    <tr id="e"><td>5</td><td>Five</td><td><input type="text" name="five" value="five"/></td></tr>
                </table>
                <input type="submit" name="update" value="Update">
            </form>
            <?php
            $result[] = $_POST['table-2'];
            foreach ($result as $value) {
                echo "$value<br/>";
            }
            ?>
        </body>

        # jqueryTableDnDArticle.js
        ...
        $("#table-2").tableDnD({
            onDragClass: "myDragClass",
            onDrop: function(table, row) {
                var rows = table.tBodies[0].rows;
                var debugStr = "Row dropped was " + row.id + ". New order: ";
                for (var i = 0; i < rows.length; i++) {
                    debugStr += rows[i].id + " ";
                }
                //$("#debugArea").html(debugStr);
                $.ajax({
                    type: "POST",
                    url: "index.php",
                    data: $.tableDnD.serialize(),
                    success: function(html) {
                        alert("Success");
                    }
                });
            },
            onDragStart: function(table, row) {
                $("#debugArea").html("Started dragging row " + row.id);
            }
        });
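    A hedged sketch (not from the original post) of why the submit button never yields $_POST['table-2']: only the ajax call in onDrop sends the serialized order; the plain form post submits just the input fields. One common fix is to mirror the order into a hidden input so it travels with the form ('row-order' is an illustrative name):

        // In the form: <input type="hidden" name="row-order" id="row-order" />
        $("#table-2").tableDnD({
            onDrop: function (table, row) {
                // e.g. "table-2[]=a&table-2[]=b&..." stashed for the form post
                $("#row-order").val($.tableDnD.serialize());
            }
        });
        // Then in PHP:
        //   parse_str($_POST['row-order'], $order);
        //   $ids = $order['table-2'];   // array of row ids in their sorted order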

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem
    At my workplace, we're trying to find the best way to create automated tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between:

    - Application code in reusable and nest-able GUI components.
    - Tests which are easily created by the testing team.
    - Tests which can be recorded once and then automated.
    - Tests which do not break after small cosmetic changes to the site.

    XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface?

    Limitations

    - We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x.
    - The test-making folks are mostly trained to use Selenium IDE to directly record things.
    - The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go.
    - We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique.
    - Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing.
    - It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use.
    - Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved.

    Current thoughts
    I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.
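    A heavily hedged sketch of the locator-builder idea (LocatorBuilders.add and LocatorBuilders.order come from Selenium-IDE's user-extensions mechanism, but treat the exact API as an assumption here; the data-test-id attribute is illustrative): the application stamps a stable attribute on important components, and a custom builder prefers it when recording.

        // user-extensions.js for Selenium-IDE (API usage is an assumption)
        LocatorBuilders.add('dataTestId', function (e) {
            var id = e.getAttribute && e.getAttribute('data-test-id');
            if (id) {
                // Stable against cosmetic changes, unlike generated XPath.
                return "css=[data-test-id='" + id + "']";
            }
            return null;
        });
        // Try this builder before the fragile position-based ones.
        LocatorBuilders.order.unshift('dataTestId');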

    Read the article

  • posting php code using jquery .html()

    - by Emmanuel Imwene
    Simple query, but it's giving me a headache. I need a division to be updated with a changed session variable each time a user clicks on a name. I figured I'd use .html() with jQuery to update the division. I don't know if you can do this, but here goes:

        $("#inner").html('<?php
        session_start();
        if (file_exists($_SESSION['full']) || file_exists($_SESSION['str'])) {
            if (file_exists($_SESSION['full'])) {
                $full = $_SESSION['full'];
                $handlle = fopen($full, "r");
                $contents = fread($handlle, filesize($full));
                fclose($handlle);
                echo $contents;
                echo '<script type="text/javascript" src="jquery-1.8.0.min (1).js">';
                echo '</script>';
                echo '<script type="text/javascript">';
                echo 'function loadLog(){
                    var oldscrollHeight = $("#inner").attr("scrollHeight") - 20;
                    $.ajax({
                        url: \''.$_SESSION['full'].'\',
                        cache: false,
                        success: function(html){
                            $("#inner").html(html); //Insert chat log into the #chatbox div
                            var newscrollHeight = $("#inner").attr("scrollHeight") - 20;
                            if(newscrollHeight > oldscrollHeight){
                                $("#inner").animate({ scrollTop: newscrollHeight }, \'normal\'); //Autoscroll to bottom of div
                            }
                        },
                    });
                }
                setInterval(loadLog, 2500);';
                echo '</script>';
            } else {
                $str = $_SESSION['str'];
                if (file_exists($str)) {
                    $handle = fopen($str, 'r');
                    $contents = fread($handle, filesize($str));
                    fclose($handle);
                    echo $contents;
                    $full = $_SESSION['full'];
                    $handlle = fopen($full, "r");
                    $contents = fread($handlle, filesize($full));
                    fclose($handlle);
                    echo $contents;
                    echo '<script type="text/javascript" src="jquery-1.8.0.min (1).js">';
                    echo '</script>';
                    echo '<script type="text/javascript">';
                    echo 'function loadLog(){
                        var oldscrollHeight = $("#inner").attr("scrollHeight") - 20;
                        $.ajax({
                            url: \''.$_SESSION['str'].'\',
                            cache: false,
                            success: function(html){
                                $("#inner").html(html); //Insert chat log into the #chatbox div
                                var newscrollHeight = $("#inner").attr("scrollHeight") - 20;
                                if(newscrollHeight > oldscrollHeight){
                                    $("#inner").animate({ scrollTop: newscrollHeight }, \'normal\'); //Autoscroll to bottom of div
                                }
                            },
                        });
                    }
                    setInterval(loadLog, 2500);';
                    echo '</script>';
                }
            }
        }
        ?>');

    Is that legal? If not, how would I accomplish this?
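    A hedged sketch of the usual approach (the endpoint name is illustrative): the PHP above runs once on the server before the page is sent, so it can't be re-executed by .html(). Moving the PHP into its own script that echoes the current log, and polling it from the client, gives the "update on change" behavior; it is essentially the loadLog pattern already embedded in the post:

        // chatlog.php (assumed): session_start(); read the session's file; echo it.
        function loadLog() {
            $.ajax({
                url: 'chatlog.php',   // server-side script that prints the fresh content
                cache: false,
                success: function (html) {
                    $('#inner').html(html);   // update the division client-side
                }
            });
        }
        $(document).ready(function () {
            loadLog();
            setInterval(loadLog, 2500);   // or call loadLog() from the name's click handler
        });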

    Read the article

  • What is wrong with this attempt of sending a break-signal?

    - by Jook
    I have quite a headache about this seemingly easy task: send a break signal to my device, like wxTerm (or any similar terminal application) does. This signal has to be 125ms long, according to my tests and the device's specification. It should result in a specific response, but what I get is a longer response than expected, and the transmitted data is wrong. E.g.:

        what it should respond: 08 00 81 00 00 01 07 00
        what it does respond:   08 01 0A 0C 10 40 40 07 00 7F

    What really boggles me is that after I have used wxTerm to look at my available com-ports (without connecting or sending anything), my code starts to work! I can then send as many breaks as I like and I get my response right from then on. I have to reset my PC in order to try it again. What the heck is going on here?! Here is my code for a reset through a break signal:

        minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device)
            : active_(true), io_service_(io_service), serialPort(io_service, device)
        {
            if (!serialPort.is_open())
            {
                cerr << "Failed to open serial port\n";
                return;
            }
            boost::asio::serial_port_base::flow_control FLOW(boost::asio::serial_port_base::flow_control::hardware);
            boost::asio::serial_port_base::baud_rate baud_option(baud);
            serialPort.set_option(FLOW);
            serialPort.set_option(baud_option);
            read_start();

            std::cout << SetCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::ptime mst1 = boost::posix_time::microsec_clock::local_time();
            boost::this_thread::sleep(boost::posix_time::millisec(125));
            boost::posix_time::ptime mst2 = boost::posix_time::microsec_clock::local_time();
            std::cout << ClearCommBreak(serialPort.native_handle()) << std::endl;
            std::cout << GetLastError() << std::endl;
            boost::posix_time::time_duration msdiff = mst2 - mst1;
            std::cout << msdiff.total_milliseconds() << std::endl;
        }

    Edit: I only had to look at the com-port combo-box in wxTerm, without establishing any active connection, to make my code work. I am guessing that there is some sort of initialisation missing which is done when wxTerm builds its list of serial ports.
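    A hedged guess (not from the original post) at the missing initialisation: enumerating or opening ports with a terminal program can leave the modem-control lines (DTR/RTS) asserted, and with hardware flow control enabled the device may ignore everything until they are. Explicitly raising them before the break removes the dependency on wxTerm; EscapeCommFunction, SetCommBreak, and ClearCommBreak are standard Win32 calls:

        #include <windows.h>

        void send_break(HANDLE port)
        {
            EscapeCommFunction(port, SETDTR);   // assert Data Terminal Ready
            EscapeCommFunction(port, SETRTS);   // assert Request To Send
            SetCommBreak(port);                 // start the break condition
            Sleep(125);                         // hold it for the required 125 ms
            ClearCommBreak(port);               // end the break condition
        }

        // With boost::asio, the native handle feeds straight in:
        //   send_break(serialPort.native_handle());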

    Read the article

  • How to align Definition Lists in IE6 ?

    - by ellander
    I'm having a major headache trying to align some dt and dd elements in IE6. It looks fine in IE7 and Firefox, but the dt elements don't appear in IE6. Can anyone help? Here is the code:

        <div id="listMembers">
            <h3>Members</h3>
            <dl class="myDL">
                <dt>Name</dt>
                <dd>John Smith</dd>
                <dt>Address</dt>
                <dd>the street</dd>
                ...
            </dl>
            <div id="listOptions">
                <div>
                    <table>...</table>
                </div>
            </div>
        </div>

    and the CSS:

        DL.myDL {
            BORDER-RIGHT: black 2px outset;
            PADDING-RIGHT: 2px;
            BORDER-TOP: black 2px outset;
            DISPLAY: block;
            PADDING-LEFT: 2px;
            BACKGROUND: #ccbe99;
            PADDING-BOTTOM: 2px;
            BORDER-LEFT: black 2px outset;
            WIDTH: auto;
            PADDING-TOP: 2px;
            BORDER-BOTTOM: black 2px outset;
            FONT-FAMILY: "Trebuchet MS", Arial, sans-serif
        }
        DL.myDL DT {
            CLEAR: both;
            PADDING-RIGHT: 3px;
            DISPLAY: inline;
            FLOAT: left;
            WIDTH: 250px;
            TEXT-ALIGN: right
        }

    I basically want the dt text aligned to the right and the dd on the right-hand side with left-aligned text. I reset the margin on all elements to 0 before anything else in the CSS, and the elements are within a div with position: relative.
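    A hedged sketch of one common way to line up the pairs (the 260px is illustrative, sized to clear the 250px dt): give the dd a left margin instead of relying on the float alone, and add zoom: 1, the usual IE6 hasLayout nudge, which often cures floated elements that refuse to render at all.

        DL.myDL DT {
            CLEAR: left;
            FLOAT: left;
            WIDTH: 250px;
            PADDING-RIGHT: 3px;
            TEXT-ALIGN: right;
            zoom: 1;   /* trigger hasLayout in IE6 */
        }
        DL.myDL DD {
            MARGIN: 0 0 0 260px;   /* sits beside the 250px dt */
            TEXT-ALIGN: left;
            zoom: 1;
        }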

    Read the article

  • Database advantages? Access, MySQL, msSQL, or any others?

    - by JimZ
    Dear Stackoverflowers, I just started to learn programming, and now I'm putting this question online based on a quote: no question is silly.

    My work needs to develop a web-based order system, which requires a database system. Having used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL or MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find Access can fulfill my requirements. And I also tried MSSQL for development, which I found not quite as convenient to use. I have searched on Stack Overflow and found no general answer to my doubt. Now I am sincerely hoping some experienced and professional developers could clear my doubts. Here are some Access advantages which I don't think other database systems have. I hope you could help me also find these advantages in the others.

    1. Access is portable; I can just copy a xxx.accdb file to my company and continue with development.
    2. Access easily generates helpful fields; for example, it will automatically generate a field that counts automatically and can be used as a primary key value.
    3. It is more compatible with Excel for displaying and filtering data.
    4. Importantly, it needs nearly no environment setup; it just needs MS Office to be installed.
    ............others

    However, I also find some points where MSSQL has the advantage:

    1. Security reasons.
    2. Easy to back up (just use the BACKUP SQL statement to do it).
    3. Can write stored procedures to keep some functions in the database.
    ...............others

    Specifically, I wish some friends could tell me how to make the other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the versions of MSSQL are not the same. Thank you all and best regards. :)
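    A hedged sketch (the database name and paths are illustrative) of the T-SQL backup/restore pair usually used to move a database between machines, with the caveat that matches the poster's headache: a backup only restores onto the same or a newer SQL Server version.

        -- On the work server:
        BACKUP DATABASE OrderSystem TO DISK = 'C:\backups\OrderSystem.bak';
        -- Copy the .bak file home, then on the home server:
        RESTORE DATABASE OrderSystem FROM DISK = 'C:\backups\OrderSystem.bak';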

    Read the article

  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage, etc. It's a beast, from what I have been told. We just got this DB for a tool that, when I first came on as a co-op, had about 20 people using it; now it's upwards of 150 people. I am the only one working on it :(

    We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach.

    Our main problem is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well. Essentially they fail, and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and are a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query, and then drop the connection. Very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us.

    Because this is distributed across our farm, we can't implement persistent connections. I do this with our web server, but it's on a fixed system. The other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to open; they do not need to act immediately. Some sort of queuing system?

    I've been advised to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions or ideas? Any help would be greatly appreciated.

    Sadly I am just a co-op student working for a very big company, and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked, and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible; preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
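    A hedged pointer (not from the original post): Oracle 11g ships Database Resident Connection Pooling (DRCP) for exactly this many-short-lived-connections pattern. A DBA starts the pool once, and each script opts in by appending :POOLED to its Easy Connect string, borrowing a pooled server process instead of spawning a dedicated one per connect. Host and service names below are illustrative:

        # DBA, once:  SQL> EXECUTE DBMS_CONNECTION_POOL.START_POOL();
        use DBI;

        my $dbh = DBI->connect(
            "dbi:Oracle://dbhost.example.com:1521/ORCL:POOLED",
            $user, $password,
            { RaiseError => 1 },
        );
        # ... quick select/insert as before ...
        $dbh->disconnect;   # returns the pooled server instead of tearing one down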

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX, written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases had only simple two-node fail-over solutions. If you had a lot of data to store, you would choose between the few commercial RDBMS solutions and writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law: processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated in both the business and personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together, this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up, and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities. They were designed to address massive amounts of data and millions of requests per second, and to scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few that required more complex joins.

    They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale," so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB's except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk. Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group, and some allow writes as well as reads at any node; Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forget all the noise about how NoSQL solutions are complex distributed databases: when you boil them down to a single node, you still have to store the data in some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage that manages key/value or document sets and generally has some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely.

    Our key/value API could be extended over the net using any of a number of existing network protocols, such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing and jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...

    Read the article

  • Best of Breed vs. Suite – Oracle’s SaaS Delivers Both

    - by yaldahhakim
    The debate over which is better, "best of breed" business applications vs. an integrated suite, is certainly not a new conversation. This has been argued between IT vendors and CIOs for years. It's also important to clarify that "best of breed" does not necessarily translate into the richest functionality; rather, it's often about just having the best-fit solution to solve a specific business problem or need. So what does cloud have to do with the niche vs. suite debate? Consuming business applications in a cloud or SaaS deployment model can change the best of breed vs. suite discussion, if the cloud is done right. It's having your cake and eating it too, only better: you don't have to gather all the ingredients or wait to bake your cake, and you can adjust how big a slice you take. Before you eat, it's worth pausing to recall much of what we learned about IT over the last decade. These basic IT principles still hold true even though the financial model has changed from buying to renting. In other words, what's under the technology hood still matters. Architecture and development methodologies, like building an application based on open standards so it works with other systems, are still important. Data and information silos, complex integrations, and proprietary technologies that lock you in are still bad. While some may argue that IT no longer matters with cloud, the opposite is actually true. If anything, cloud can help return IT to its rightful place as a key strategic asset rather than a liability on the balance sheet. The "I" in CIO was never meant to stand for "integration," yet it's amazing how much time and money is poured into these types of initiatives for most organizations each year. Rather, the "I" needs to stand for "innovation." This is where Oracle SaaS can uniquely help. Oracle's application strategy has not really changed over the years. It's always been about bringing the best and richest functionality across the enterprise to our customers while leveraging a common, standards-based, enterprise-grade platform. So not just the best fit, but the best capabilities, based on the input of thousands of enterprise customers across the globe. Oracle invests billions in R&D every year to add new capabilities to the broadest cloud portfolio in the industry, spanning functional pillars like CRM, HCM, ERP, etc. And where it makes sense, Oracle combines key strategic acquisitions to complement organic functionality. The result is best of breed delivered in a suite. Again, this is not something new. The game changer now with cloud is that it impacts HOW Oracle customers adopt the richest, most modern applications across the business, and how they continue getting them.
    Consuming Oracle applications in the cloud means you can adopt new capabilities and updates very quickly and easily. There's no hardware to buy or software to manage. Oracle does it for you. Low upfront costs and an OpEx financial model are the easy part. Oracle Cloud Applications take it a big step further. For organizations that demand the latest and richest functionality and want to accelerate the time to value of their IT investment, Oracle Cloud is the right path. It's about holistically changing the "hows" and the "whys" of the organization by leveraging transformational innovations like social, mobile, and big data in a consistent and more powerful way. It's not just about sales force automation or talent management; these technologies should impact all parts of the company, and Oracle Cloud is the enterprise-grade delivery vehicle. Oracle SaaS helps break down barriers to adoption and eases the headache of upgrades, investing in new supporting hardware, or adding internal expertise to manage it all. With Oracle Cloud, customers can get best-of-breed capabilities in either a full-suite model or a la carte. And because it's entirely built on open standards, it's built to co-exist with existing IT investments. Updates can be automatic or delayed based on a customer's requirements. And it's complete: a full suite of cross-pillar functionality. Even better, if you don't like it, or need more or less, just turn the dial up or down. Just like your utility bill, you pay for what you use, and can consume more or less power whenever you need it. Lower cost, lower investment risk, without compromising on functionality, security, or performance. Technology still matters in the cloud. So our cloud customers also like that when they adopt our cloud applications, they get the best underlying technology, from the middleware and database platform down to infrastructure and Oracle's engineered systems. Therefore it's not just the latest and greatest in application functionality; everything underneath that makes it work is also the latest and greatest. The best-of-breed technology stack powering best-of-breed business applications, all delivered in a subscription-based model. The best of both worlds. Yep, that's the idea.

    Read the article

  • Will IE 9 have a place in the hearts of users?

    - by anirudha
    In an advertisement for IE 9, MSFT compares two products: their IE9 and Chrome 6. I know 6 is not the current version [9 is], but no objection; they may have made the ad when 6 was the current version and IE9 was still in beta or RC. On the IE 9 test-drive website they show plenty of material telling users that IE9 performs better and that Chrome and Firefox do not. They don't compare against Firefox, perhaps because Firefox has not been in the news and search trends lately the way it was before its RC release, when many users were googling for it. I myself found that IE9 performs more smoothly than Chrome. But what does MSFT do after IE9? Nothing; they wait for IE 10 instead of shipping updates the way Google Chrome and Firefox do. Does IE9 have anything new for developers, big or small? They will tell you flashy but useless things every time they prepare the next version; it matters to them, because every addition is something to announce, but it doesn't matter to you. I don't hold any grudge against IE; I just like to write reviews as honestly as I can. Let me show you what I have experienced with IE and other browsers like Chrome and Firefox.

IE 9 still has no plugins the way other browsers provide them. Firefox has Firebug, a great utility and the best option a developer has for debugging code. The IE9 developer tools are good, but you can't customize them, and there are no ready-made customizations, whereas many people have built customizations for Firebug: FirePicker for picking colors in Firebug, Firebug Autocomplete for IntelliSense-like behavior when you write JavaScript in the console panel, Pixel Perfect, FireQuery, the SitePoint reference, and many other great examples we all love to use. Beyond that, Firefox makes many things customizable, like themes and the UI; more customization means more things users and developers can build themselves, and more contribution makes the software better. Customization is a great strength of Firefox and Chrome. If you read some of the MSDN posts on what's new in the IE 9 developer tools and then look at Firefox and Chrome, you feel like they are joking. In Firefox a single plugin can do a great many things, but in IE you still have only the IE 9 developer tools; there is no equivalent of using Firebug and the many other utilities in Firefox that make development easier, faster, and as good as it can be.

If you look at the Firefox page on Mozilla's site, its subline is "high performance, easy customization, advanced security". There is really no comparison with IE here, because IE has only performance and nothing else, while Firefox has all three things that make a product loved. The third thing I really love is security. For a long time IE6 was anything but hack-proof; many attackers broke it easily while Firefox stayed secure. I found myself that many websites installed software on clients' computers without the users ever knowing, so they could track everything. Sometimes they hijacked the homepage and made their own website the homepage; sometimes, when you tried to go to any website, they sent you to their site first. The problem I am describing is not from long ago: it was as late as 2008, when Firefox was much better than IE6. If anyone has had a bad experience with this kind of software, share it with us; I would like to hear your voice. While IE is still not fit for use, Firefox is a good option for everyone, user or developer. I don't know why anyone makes a next version of IE; IE still has time to go away from the web.

Firefox is not rude the way IE is; Mozilla still believes in user feedback, and Chrome also opens the door for feedback on its product, Google Chrome. But what has IE built from user feedback? Nothing. They still try to teach users what they made instead of thinking about what users need. Spend a few hours on Firefox and Chrome and you will see what I mean. Here is what you get as a user of IE versus a browser like Google Chrome or Firefox. As a user, IE gives you nothing but blah, blah, and more blah; the next version of IE just means the next IE6 for the web. In Google Chrome you find plugins, addons, and customization to make the experience better, but in IE9 you can't customize anything, not even the themes it ships by default. Firefox already has a great catalog of plugins and addons to improve your experience of the web, but IE9 has nothing. That means IE9 is not for users, and the others, Chrome and Firefox, give you a much better experience than IE.

The next audience after users is developers. All developers want smooth development that saves their time, not tools that consume it. Posts on IE9 list the things improved in the IE 9 developer tools, but is one developer tool enough for web development? Developers need more utilities to solve many different kinds of puzzles, and IE 9 never provides them; in Firefox you have a utility for every task, small or big, and in Chrome you get much the same experience. IE9 gives you no plugin or utility to make your work faster, and IE is a fresh headache for developers because it does not ship updates as soon as the others do: in Firefox and Chrome, when a bug is reported it is fixed quickly and distributed in the next release very soon, but with IE you wait a long time; IE 8 and IE 9 had no official update release between them.

My conclusion is that there is no reason to use IE or to adopt version 9. It is really not for developers or users, whether newbies or smart people. As a rule I want to warn you about IE, because it is my responsibility to move things in as good a direction as I can. And are you sure there is no reason or profit they expect from IE9? If not, why did they forget Luna [Windows XP] users? Because those users are on something old, and MSFT wants to force them to hand over some money by purchasing a new version of the OS. That is how they market their software. Think about what Firefox and Chrome set out to make: Mozilla's mission is to promote openness, innovation and opportunity on the web; Chrome's mission we all see whenever we use it. IE9, by contrast, is promoted as a trick, because they want to tie something to the next version of Windows; if somebody likes IE9 [even if only surprised by the ads they see or the posts they read], they will purchase Windows as soon as they can.

You may feel that I am against IE9 and in favor of Chrome and Firefox, and you feel right: I hate IE from the heart, not from a pencil. You will feel the same once you have tried the three major products I described here: Chrome, Firefox, and IE. Don't simply believe the blogs, posts, or articles provided by the merchant or the vendor's website. Open your eyes, read, and consider whether what they say is really true; if you are confused, compare with something else. Now you know the truth, because no one describes a product as candidly as a user who actually uses it, rather than only the people who built its features. Always keep your eyes open, don't just believe, use your mind, and find the truth. Thanks for reading my post; goodbye and take care.

    Read the article

  • Finding nuggets in ARC discussions

    - by alanc
    A bit over twenty years ago, Sun formed an Architecture Review Committee (ARC) that evaluates proposals to change interfaces between components in Sun software products. During the OpenSolaris days, we opened many of these discussions to the community. While they're back behind closed doors, and at a different company now, we still continue to hold these reviews for the software from what's now the Sun Systems Group division of Oracle. Recently one of these reviews was held (via e-mail discussion) to review a proposal to update our GNU findutils package to the latest upstream release. One of the upstream changes discussed was the addition of an "oldfind" program. In findutils 4.3, find was modified to use the fts() function to walk the directory tree, and oldfind was created to provide the old mechanism in case there were bugs in the new implementation that users needed to work around. In Solaris 11, though, we still ship the find descended from SVR4 as /usr/bin/find, and the GNU find is available as either /usr/bin/gfind or /usr/gnu/bin/find. This raised the discussion of whether we should add oldfind and, if so, what we should call it. Normally our policy is to only add the g* names for GNU commands that conflict with an existing Solaris command; for instance, we ship /usr/bin/emacs, not /usr/bin/gemacs. In this case, however, it seemed it would be more confusing to have /usr/bin/oldfind be the older version of /usr/bin/gfind rather than of /usr/bin/find. Thus if we shipped it, it would make more sense to call it /usr/bin/goldfind, which several ARC members noted read more naturally as "gold find" than as "g old find". One of the concerns we often discuss in ARC is whether a change is likely to be understood by users or whether it will result in more calls to support. As we hit this part of the discussion on a Friday at the end of a long week, I couldn't resist putting forth a hypothetical support call for this command: "Hello, Oracle Solaris Support, how may I help you?" "My admin is out sick, but he sent an email that he put the findutils package on our server, and I can run goldfind now. I tried it, but goldfind didn't find gold." "Did he get the binutils package too?" "No, he just said findutils, do we need binutils?" "Well, gold comes in the binutils package, so goldfind would be able to find gold if you got that package." "How much does Oracle charge for that package?" "It's free for Solaris users." "You mean Oracle ships packages of gold to customers for free?" "Yes, if you get the binutils package, it includes GNU gold." "New gold? Is that some sort of alchemy, turning stuff into gold?" "Not new gold, gold from the GNU project." "Oracle's taking gold from the GNU project and shipping it to me?" "Yes, if you get binutils, that package includes gold along with the other tools from the GNU project." "And GNU doesn't mind Oracle taking their gold and giving it to customers?" "No, GNU is a non-profit whose goal is to share their software." "Sharing software sure, but gold? Where does a non-profit like GNU get gold anyway?" "Oh, Google donated it to them." "Ah! So Oracle will give me the gold that GNU got from Google!" "Yes, if you get the package from us." "How do I get the package with the gold?" "Just run pkg install binutils and it will put it on your disk." "We've got multiple disks here - which one will it put it on?" "The one with the system image - do you know which one that is?"
"Well, the note from the admin says the system is on the first disk and the users are on the second disk." "Okay, so it should go on the first disk then." "And where will I find the gold?" "It will be in the /usr/bin directory." "In the user's bin? So that's on the second disk?" "No, it would be on the system disk, with the other development tools, like make, as, and what." "So what's on the first disk?" "Well, if the system image is there, the commands should all be there." "All the commands? Not just what?" "Right, all the commands that come with the OS, like the shell, ps, and who." "So who's on the first disk too?" "Yes. Did your admin say when he'd be back?" "No, just that he had a massive headache and was going home after I tried to get him to explain this stuff to me." "I can't imagine why." "Oh, is why a command too?" "No, _why was a Ruby programmer." "Ruby? Do you give those away with the gold too?" "Yes, but it comes in the ruby package, not binutils." "Oh, I'll have to have my admin get that package too! Thanks!" Needless to say, we decided this might not be the best idea. Since the GNU project hasn't had to release a serious bug fix for the new find in the past few years, the new GNU find seems pretty stable, and we always have the SVR4 find to use as a fallback in Solaris, so adding oldfind didn't seem really necessary, and we passed on including it when we update to the new findutils release. [Apologies to Abbott, Costello, their fans, and everyone who read this far. The Gold (linker) page on Wikipedia may explain some of the above, but can't explain why goldfind is the old GNU find, but gold is the new GNU ld.]

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on Hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why, and how to fix the issue. The blocked emails are sent from my domain, canadaka.net. I use Google Apps to host the regular email service for my @canadaka.net addresses. I can send email from my desktop or Gmail to a Hotmail address without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder. The IP the emails are sent from is the same IP my site is hosted on: 66.199.162.177. This IP has been mine only since August 2010; I had a different IP for the previous 3-4 years. The IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177. The one list my IP does appear on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1", but since I use Google for my email hosting, I don't control reverse DNS for all the MX records. I do have reverse DNS set up for my own IP, though; it resolves to "mail.canadaka.net". I have signed up for SNDS and was approved; my IP shows "All of the specified IPs have normal status." Sender Score: 100 (https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14). My McAfee threat level seems fine. I have a TXT SPF record set up. I currently use xname.org as my DNS, and they don't have a dedicated field for SPF; their FAQ says to add the SPF info as a TXT entry: v=spf1 a include:_spf.google.com ~all. Some SPF-checking tools I've used detect that my domain has a valid SPF record, but others don't, like Microsoft's SPF wizard ("No SPF Record Found. A and MX Records Available"); I think this is because it looks specifically for an SPF-type record rather than the TXT record. From home I can run "nslookup -type=TXT canadaka.net" and it returns: Server: google-public-dns-a.google.com Address: 8.8.8.8 Non-authoritative answer: canadaka.net text = "v=spf1 a include:_spf.google.com ~all". One strange thing I found is that I'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25", while I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and ran ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com. This is what the email headers look like when I send to a Google email server and the message arrives with no trouble; you can see that SPF is passing.
Delivered-To: [email protected] Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST) Return-Path: Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected] Message-Id: <[email protected] Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800 Date: Sun, 27 Feb 2011 18:01:29 -0800 Subject: Test To: [email protected] From: XXXX Reply-To: [email protected] X-Mailer: PHP/5.2.13

I can send to Gmail and other email services fine. I don't know what I'm doing wrong!

UPDATE 1: I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they all go directly to the Junk folder.

UPDATE 2: I used telnet to send a test message to port25.com, and it seems my SPF is not being detected. Result: neutral (SPF-Result: None) canadaka.net. SPF (no records) canadaka.net. TXT (no records). I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that let you check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what the SPF record in my xname.org DNS file looks like: canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all". I did have a nameserver as my 4th option that doesn't carry the TXT records, since it doesn't support them, so I removed it from the list and instead added wtfdns.com as my 4th and 5th nameservers, which does support TXT.
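A quick way to reproduce the lookup that the various SPF checkers disagree about is to script the same "nslookup -type=TXT" query shown above. The following is a minimal Python sketch, assuming only that an nslookup binary is on the PATH; the function name and output handling are illustrative, and it is a diagnostic aid, not a full SPF validator.

# spf_check.py: a minimal sketch that wraps the same
# "nslookup -type=TXT <domain>" query used in the post and
# reports any TXT lines that carry an SPF policy.
# Assumes an nslookup binary on the PATH (an assumption, not
# something the post's setup guarantees).
import subprocess

def spf_records(domain):
    result = subprocess.run(
        ["nslookup", "-type=TXT", domain],
        capture_output=True, text=True, check=True,
    )
    # nslookup prints TXT data as:  domain  text = "v=spf1 ..."
    return [line.strip() for line in result.stdout.splitlines()
            if "v=spf1" in line]

if __name__ == "__main__":
    records = spf_records("canadaka.net")
    if records:
        print("SPF policy found in TXT:")
        for rec in records:
            print(" ", rec)
    else:
        print("No v=spf1 string in TXT output.")

If this prints the v=spf1 line while a given checker still reports no SPF record, that points at the checker querying a dedicated SPF record type or hitting a nameserver that lacks the TXT entry, rather than at the TXT record itself.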

    Read the article

< Previous Page | 8 9 10 11 12 13  | Next Page >