Search Results

Search found 1013 results on 41 pages for 'recommendation'.

Page 37/41 | < Previous Page | 33 34 35 36 37 38 39 40 41  | Next Page >

  • Database design - table relationship question

    - by iama
    I am designing the schema for a simple quiz application. It has two tables, "Question" and "Answer Choices". The Question table has 'question ID', 'question text' and 'answer ID' columns. The "Answer Choices" table has 'question ID', 'answer ID' and 'answer text' columns. With this simple schema it is obvious that a question can have multiple answer choices, hence the need for the Answer Choices table. However, a question can have only one correct answer, hence the 'answer ID' column in the Question table. That column creates the illusion that multiple questions could share a single answer, which is not the case. The alternative that eliminates this illusion is a separate table holding just the correct answers, with only two columns (question ID and answer ID) and a 1-1 relationship to the Question table, but I think that is redundant. Any recommendation on how best to design this while enforcing the rule that a question can have multiple answer choices but only one correct answer? Many thanks.

    Read the article

  • How do you search a document for a string in C++?

    - by Jeff
    Here's my code so far: #include<iostream> #include<string> #include<fstream> using namespace std; int main() { int count = 0; string fileName; string keyWord; string word; cout << "Please make sure the document is in the same file as the program, thank you!" << endl << "Please input document name: " ; getline(cin, fileName); cout << endl; cout << "Please input the word you'd like to search for: " << endl; cin >> keyWord; cout << endl; ifstream infile(fileName.c_str()); while(infile.is_open()) { getline(cin,word); if(word == keyWord) { cout << word << endl; count++; } if(infile.eof()) { infile.close(); } } cout << count; } I'm not sure how to advance to the next word; currently this loops infinitely... any recommendation? Also, how do I tell it to print out the line that the word was on? Thanks in advance!
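    Since the question is about C++ the fix belongs there, but purely to illustrate the intended logic (read and split words from the file stream itself rather than from standard input, and track the current line number so each match can be reported with its line), here is a rough sketch of the same idea in Java; the class and variable names are illustrative only.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class WordSearch {
            public static void main(String[] args) throws IOException {
                String fileName = args[0]; // document to search
                String keyWord = args[1];  // word to look for
                int count = 0;
                int lineNumber = 0;
                // Read the file line by line so we always know which line a match is on.
                try (BufferedReader reader = Files.newBufferedReader(Paths.get(fileName))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        lineNumber++;
                        // Split the line into words and compare each one with the key word.
                        for (String word : line.split("\\s+")) {
                            if (word.equals(keyWord)) {
                                System.out.println("Line " + lineNumber + ": " + line);
                                count++;
                            }
                        }
                    }
                }
                System.out.println("Total matches: " + count);
            }
        }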

    Read the article

  • Events still not showing up despite the many tries...sigh

    - by sheng88
    I have tried the recommendations from this forum and from many Google searches, but I still can't get the events to show up on my calendar via JSP. I tried with PHP and it worked, so I wonder where the error is. The processRequest method is fine, but when it dispatches to the JSP page nothing appears in the browser: protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String email=request.getParameter("email"); try { ArrayList<CalendarEvt> calc=CalendarDAO.retrieveEvent(email); for (CalendarEvt calendarEvt : calc) { System.out.println(calendarEvt.getEventId()); } request.setAttribute("calendar", calc); request.getRequestDispatcher("calendar.jsp").forward(request, response); } catch (Exception e) { } } Here is the JSP section that's giving me headaches (without the loop, the Google link does appear). I have tried putting quotation marks in and leaving them out, still no luck: <%--Load user's calendar--%> <script type='text/javascript'> $(document).ready(function() { var date = new Date(); var d = date.getDate(); var m = date.getMonth(); var y = date.getFullYear(); $('#calendar').fullCalendar({ editable: false, events: [ <c:forEach items="calendar" var="calc"> { title: '${calc.eventName}', start: ${calc.eventStart} }, </c:forEach> { title: 'Click for Google', start: new Date(y, m, 1), end: new Date(y, m, 1), url: 'http://google.com/' } ]//end of events }); }); </script> <%--Load user's calendar--%> Any kind of help would be greatly appreciated, thanks!

    Read the article

  • How to use Java on Google App Engine without exceeding minute quotas?

    - by Geo
    A very simple piece of Java code inside a doGet() servlet is using more than a second of CPU time on GAE. I have read some quota-related documentation and apparently I am not doing anything wrong. //Request the user Agent info String userAgent = req.getHeader("User-Agent"); I wanted to know what was using the CPU the most, so I used a recommendation from the Google help: //The two lines below will get the CPU before requesting User-Agent Information QuotaService qs = QuotaServiceFactory.getQuotaService(); long start = qs.getCpuTimeInMegaCycles(); //Request the user Agent info String userAgent = req.getHeader("User-Agent"); //The three lines below will get the CPU after requesting User-Agent Information // and report it to the application log. long end = qs.getCpuTimeInMegaCycles(); double cpuSeconds = qs.convertMegacyclesToCpuSeconds(end - start); log.warning("CPU Seconds on getting User Agent: " + cpuSeconds); The only thing the code above tells me is that inspecting the header uses more than a second (1000 ms) of CPU time, which for Google is a warning in the log panel. That seems to be a very simple request, and still it is using more than a second of CPU. What am I missing?
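    For reference, here is the measurement pattern quoted above written out as a complete servlet. It assumes the QuotaService API that the question itself uses (an App Engine API that has long since been deprecated); the servlet class and its output are otherwise illustrative.

        import java.io.IOException;
        import java.util.logging.Logger;

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        import com.google.appengine.api.quota.QuotaService;
        import com.google.appengine.api.quota.QuotaServiceFactory;

        public class UserAgentServlet extends HttpServlet {
            private static final Logger log = Logger.getLogger(UserAgentServlet.class.getName());

            @Override
            public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                QuotaService qs = QuotaServiceFactory.getQuotaService();

                // Snapshot CPU megacycles before the call being measured.
                long start = qs.getCpuTimeInMegaCycles();

                // The operation under suspicion: reading a request header.
                String userAgent = req.getHeader("User-Agent");

                // Snapshot again afterwards and convert the difference to CPU seconds.
                long end = qs.getCpuTimeInMegaCycles();
                double cpuSeconds = qs.convertMegacyclesToCpuSeconds(end - start);
                log.warning("CPU seconds spent getting User-Agent: " + cpuSeconds);

                resp.getWriter().println("User-Agent: " + userAgent);
            }
        }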

    Read the article

  • starting rails in test environment

    - by Brian D.
    I'm trying to load up rails in the test environment using a ruby script. I've tried googling a bit and found this recommendation: require "../../config/environment" ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test' This seems to load up my environment alright, but my development database is still being used. Am I doing something wrong? Here is my database.yml file... however I don't think it is the issue development: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_development pool: 5 username: root password: dev host: localhost # Warning: The database defined as "test" will be erased and # re-generated from your development database when you run "rake". # Do not set this db to the same as development or production. test: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_test pool: 5 username: root password: dev host: localhost production: adapter: mysql encoding: utf8 reconnect: false database: BrianSite_production pool: 5 username: root password: dev host: localhost I can't use ruby script/server -e test because I'm trying to run ruby code after I load rails. More specifically what I'm trying to do is: run a .sql database script, load up rails and then run automated tests. Everything seems to be working fine, but for whatever reason rails seems to be loading in the development environment instead of the test environment. Here is a shortened version of the code I am trying to run: system "execute mysql script here" require "../../config/environment" ENV['RAILS_ENV'] = ARGV.first || ENV['RAILS_ENV'] || 'test' describe Blog do it "should be initialized successfully" do blog = Blog.new end end I don't need to start a server, I just need to load my rails code base (models, controllers, etc..) so I can run tests against my code. Thanks for any help.

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The parsed information will be stored in different objects, which will then be added to a collection. Since the GUI doesn't need to load ALL these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now. I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, what the best way is to reload previously viewed data: re-run the parser, repopulate the collection and the GUI? Or find a way to keep the collection in memory, or serialize/deserialize the collection itself? I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say I filter on ID, so my new subset contains data from two previously analyzed subsets. This would be no problem if I kept a master copy of the whole data in memory. I've read that google-collections is good and efficient when handling big amounts of data and offers methods that simplify a lot of things, so it might offer a way to keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done in similar scenarios. I can provide more specifics if needed.
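    One common on-demand pattern, sketched below under assumed names (the Record type, the comma-separated line format and the cache size are all hypothetical): parse only the requested range of lines when it is asked for, and keep a small, size-bounded LRU cache of recently viewed records so that returning to a previously viewed subset does not always force a full re-parse.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class OnDemandRecordStore {

            // A record parsed from one line of the input file (hypothetical shape).
            public record Record(long id, String payload) {}

            private final Path file;
            private final Map<Long, Record> cache; // line number -> parsed record, LRU-evicted

            public OnDemandRecordStore(Path file, int maxCachedRecords) {
                this.file = file;
                // Access-ordered LinkedHashMap that drops the eldest entry once the cache is full.
                this.cache = new LinkedHashMap<Long, Record>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Long, Record> eldest) {
                        return size() > maxCachedRecords;
                    }
                };
            }

            // Load only the requested line range, serving from the cache where possible.
            public List<Record> loadRange(long firstLine, long lastLine) throws IOException {
                List<Record> result = new ArrayList<>();
                try (BufferedReader reader = Files.newBufferedReader(file)) {
                    String line;
                    long lineNo = 0;
                    while ((line = reader.readLine()) != null) {
                        lineNo++;
                        if (lineNo > lastLine) {
                            break;              // past the requested window, stop reading
                        }
                        if (lineNo < firstLine) {
                            continue;           // before the requested window, skip
                        }
                        Record rec = cache.get(lineNo);
                        if (rec == null) {
                            rec = parse(line);  // parse on demand only
                        }
                        cache.put(lineNo, rec); // refresh LRU order
                        result.add(rec);
                    }
                }
                return result;
            }

            // Hypothetical line format: "<id>,<payload>".
            private Record parse(String line) {
                String[] parts = line.split(",", 2);
                return new Record(Long.parseLong(parts[0]), parts.length > 1 ? parts[1] : "");
            }
        }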

    Read the article

  • Freelance web hosting - what are good LAMP choices?

    - by tkotitan
    I think it's best if I ask this question with an example scenario. Let's say your mom-and-pop local hardware store has never had a website, and they want you, the freelance developer, to build them one. You have all the skills to run a LAMP setup and admin a system, so the difficult question you ask yourself is: where will I host it? You aren't going to host it out of the machine in your apartment. Let's say you want to be able to customize your own system, install the version of PHP you want, and manage your own database. Perhaps the best kind of hosting is a virtual machine, so you can customize the system as you see fit. But this is essentially a "set it and forget it" site that you build, bill by the hour for, and then are done with. In other words, the hosting should not be an issue. Given the requirements of hosting a website (unlimited growth potential with enough bandwidth to handle visitors; a wide range of system and programming options so it stays portable; relatively cheap, though not necessarily the cheapest, or with reasonable scaling costs; reliable hosting with good support; hosted entirely on the host company's hardware), who would you pick to host this website? Yes, I am asking for a business/company recommendation. Is there a clear answer for this scenario, or a good source that can reliably give the current answer? I know there are all kinds of schemes out there. I'm just wondering if any one company fills the bill for freelancers and stands out in such a crowded market.

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in an ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data is only needed a couple of times a year, and then only by two employees. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that those two employees can run the few times a year they need to access the restricted data, so the data is decrypted only on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
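    The stack in question is ASP.NET / SQL Server, but the flow being proposed (encrypt in the web tier with a public key; decrypt only in an offline fat client that holds the private key) can be sketched with any standard crypto library. The Java sketch below is only an illustration of that flow, not an implementation recommendation; note that plain RSA can only encrypt payloads smaller than the key size, so a real system would normally RSA-wrap a symmetric data key rather than the data itself.

        import java.nio.charset.StandardCharsets;
        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.util.Base64;
        import javax.crypto.Cipher;

        public class OfflineDecryptSketch {
            public static void main(String[] args) throws Exception {
                // In practice the key pair is generated once; the private key stays offline
                // with the fat client and only the public key is deployed with the web app.
                KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
                generator.initialize(2048);
                KeyPair keyPair = generator.generateKeyPair();

                // Web tier: encrypt the restricted field with the PUBLIC key before storing it.
                String ssn = "123-45-6789"; // illustrative restricted value
                Cipher encrypt = Cipher.getInstance("RSA/ECB/PKCS1Padding");
                encrypt.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());
                String stored = Base64.getEncoder()
                        .encodeToString(encrypt.doFinal(ssn.getBytes(StandardCharsets.UTF_8)));
                System.out.println("Value stored in the database: " + stored);

                // Offline fat client: decrypt with the PRIVATE key, which never touches the server.
                Cipher decrypt = Cipher.getInstance("RSA/ECB/PKCS1Padding");
                decrypt.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());
                String recovered = new String(
                        decrypt.doFinal(Base64.getDecoder().decode(stored)), StandardCharsets.UTF_8);
                System.out.println("Value recovered on the client: " + recovered);
            }
        }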

    Read the article

  • Reload mod_fcgid without killing Python Service

    - by Tobias
    Hi, I'm currently running a Django project on my school's web server with FCGI. I followed the multiple guides that recommend installing a virtual local Python environment and it worked out great. The only issue I had was that "touching" my fcgi file to reload source files wasn't enough; instead I had to kill the Python process via SSH, because mod_fcgid is used. However, the admin didn't think it was a great idea that I ran my own local Python. He thought it better if I just told him which modules to install as root, which was a pretty nice service really. But doing this, I can no longer kill Python, since it runs under root (immoral as I am, I've definitely tried). The admin's recommendation was that I should try to make the FCGI script reload itself by checking its timestamp. I've tried to find documentation on how to do this, but found very little, and since I'm an absolute beginner I have no idea what would work. Does anyone have experience running Python/Django under mod_fcgid, or tips on where to find related guides/documentation?

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply but was surprised by the time needed for the task, so I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and I was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example. 1) Loading the library and selecting the database: library("biomaRt") athaliana <- useMart("plants_mart_14") athaliana <- useDataset("athaliana_eg_gene",mart=athaliana) 2) Querying the database: loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790", "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240" ) I created a function for use in lapply: foo <- function(loci) { getBM("description","tair_locus",loci,athaliana) } When I use this function on the first element: > system.time(foo(cwp_loci[1])) utilisateur système écoulé 0.020 0.004 1.599 When I use lapply to retrieve the data for all values: > system.time(lapply(loci, foo)) utilisateur système écoulé 0.220 0.000 16.376 I then created a new function with a for-loop: foo2 <- function(loci) { for (i in loci) { getBM("description","tair_locus",loci[i],athaliana) } } Here is the result: > system.time(foo2(loci)) utilisateur système écoulé 0.204 0.004 10.919 Of course, this will be applied to a big list of loci, so the best-performing option is needed. Thank you for your assistance. EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better. > system.time(lapply(loci, foo)) utilisateur système écoulé 0.236 0.024 110.512 > system.time(foo2(loci)) utilisateur système écoulé 0.208 0.040 116.099 > system.time(foo(loci)) utilisateur système écoulé 0.028 0.000 6.193

    Read the article

  • Lightbox-style dialog shows below YouTube movie on Mac OS 10.6

    - by Mark
    This is a "but it works on my machine" one and could be tricky: I have a lightbox-style HTML dialog that shows a menu on top of a web page. It can be injected into any web page via a JavaScript bookmarklet. One of my users is trying to use it on YouTube.com with the result that the flash movie is rendered on top of the dialog (a div with high z-index). I can't reproduce this. It works just fine for me. The dialog shows up on top of everything else on youtube.com, the video included. I had him save the page in Safari as Webarchive and send it to me. Even that shows the menu rendered correctly for me. I use the exact same version of Safari (4.0.5/531.22.7) and Flash (10.1 r53, latest beta). Only difference I could find is that he uses Snow Leopard (10.6.6) and I "only" 10.5.8. Has anybody noticed similar problems? I'm afraid that the usual wmode recommendation won't solve this (I tried & it works on my machine anyway)... Thanks! Mark

    Read the article

  • How to "upgrade" the database in the real world?

    - by Tattat
    My company has developed a web application using PHP + MySQL. The system can display a product's original price and a discount price to the user. If you haven't logged in, you get the original price; if you have logged in, you get the discount price. It is pretty easy to understand. But my company wants more features in the system: it wants to display different prices for different users. For example, user A is a golden partner, so he gets 50% off; user B is a silver partner and only gets 30% off. This logic was not prepared for in the original system, so I need to add some attributes to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to my new version of the database? Also, all the data should be preserved, and the server should keep working 24/7 (without stopping the database). Is it possible to do so? Also, any recommendations for future maintenance? Thank you.

    Read the article

  • JSF getter methods called BEFORE beforePhase fires

    - by Bill Leeper
    I got a recommendation to put all data lookups in the beforePhase for a given page. However, now that I am doing some deeper analysis, it appears that some getter methods are being called before the beforePhase is fired. It became very obvious when I added support for a URL parameter and started getting NPEs on objects that are initialized in the beforePhase call. Any thoughts? Is there something I have set up wrong? I have this in my JSP page: <f:view beforePhase="#{someController.beforePhaseSummary}"> That is only the 5th line in the JSP file, right after the taglibs. Here is the code in the beforePhaseSummary method: public void beforePhaseSummary(PhaseEvent event) { logger.debug("Fired Before Phase Summary: " + event.getPhaseId()); if (event.getPhaseId() == PhaseId.RENDER_RESPONSE) { HttpServletRequest request = (HttpServletRequest)FacesContext.getCurrentInstance().getExternalContext().getRequest(); if (request.getParameter("application_id") != null) { loadApplication(Long.parseLong(request.getParameter("application_id"))); } /* Do data fetches here */ } } The method logs that an event is fired, the servlet request is used to capture the URL parameters, and the data fetches gather data. However, the logging output is below: 2010-04-23 13:44:46,968 [http-8080-4] DEBUG ...SomeController 61 - Get Permit 2010-04-23 13:44:46,968 [http-8080-4] DEBUG ...SomeController 107 - Getting UnsubmittedCount 2010-04-23 13:44:46,984 [http-8080-4] DEBUG ...SomeController 61 - Get Permit 2010-04-23 13:44:47,031 [http-8080-4] DEBUG ...SomeController 133 - Fired Before Phase Summary: RENDER_RESPONSE(6) The logs indicate two calls to the getPermit method and one to getUnsubmittedCount before the beforePhase is fired.
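    One defensive pattern worth knowing about, sketched below with hypothetical names rather than the question's actual code: make the data-dependent getters lazy, so that even if JSF evaluates a value expression before the beforePhase listener runs, the backing bean initializes itself on first access instead of throwing an NPE.

        import java.util.Map;
        import javax.faces.context.FacesContext;

        // Hypothetical request-scoped backing bean illustrating lazy initialization in getters.
        public class SomeController {

            private Permit permit;          // hypothetical domain object
            private boolean loaded = false;

            // Getter tolerant of being called before the beforePhase listener fires.
            public Permit getPermit() {
                ensureLoaded();
                return permit;
            }

            private void ensureLoaded() {
                if (loaded) {
                    return;
                }
                // Read the URL parameter directly from the Faces context.
                Map<String, String> params = FacesContext.getCurrentInstance()
                        .getExternalContext().getRequestParameterMap();
                String applicationId = params.get("application_id");
                if (applicationId != null) {
                    permit = loadPermit(Long.parseLong(applicationId));
                }
                loaded = true;
            }

            private Permit loadPermit(long applicationId) {
                // Placeholder for the real data lookup.
                return new Permit(applicationId);
            }

            // Minimal stand-in type so the sketch compiles on its own.
            public static class Permit {
                private final long id;
                public Permit(long id) { this.id = id; }
                public long getId() { return id; }
            }
        }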

    Read the article

  • Spring Security or the BCrypt algorithm: which one is good for an accounts-like project?

    - by Ranjith Kumar Nethaji
    I am using Spring Security for hashing my password. Is it safe? I am using Spring Security for the first time. My code is here: <security:http auto-config="true"> <security:intercept-url pattern="/welcome*" access="ROLE_USER" /> <security:form-login login-page="/login" default-target-url="/welcome" authentication-failure-url="/loginfailed" /> <security:logout logout-success-url="/logout" /> </security:http> <authentication-manager> <authentication-provider> <password-encoder hash="sha" /> <user-service> <user name="k" password="7c4a8d09ca3762af61e59520943dc26494f8941b" authorities="ROLE_USER" /> </user-service> </authentication-provider> </authentication-manager> I haven't used the BCrypt algorithm. What is your feedback on both? Any recommendation?
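    If you do decide to try BCrypt, Spring Security ships a BCryptPasswordEncoder in its spring-security-crypto module (from Spring Security 3.1 onwards). The standalone sketch below shows hashing and verifying a password with it; the resulting hash would replace the SHA-1 value in the <user> entry above, with the XML typically changed to point <password-encoder ref="..."/> at a BCryptPasswordEncoder bean.

        import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

        public class BCryptDemo {
            public static void main(String[] args) {
                // The default work factor is 10; raise it to make hashing deliberately slower.
                BCryptPasswordEncoder encoder = new BCryptPasswordEncoder();

                // Hash a raw password. A random salt is generated and embedded in the result,
                // so encoding the same password twice yields different strings.
                String hash = encoder.encode("s3cret");
                System.out.println("Stored hash: " + hash);

                // Verify a login attempt against the stored hash.
                System.out.println("Correct password matches: " + encoder.matches("s3cret", hash));
                System.out.println("Wrong password matches:   " + encoder.matches("guess", hash));
            }
        }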

    Read the article

  • Read data from form

    - by Superhuman
    This is a strange question, I've never tried to do this before. I have a repetitive process requiring that I copy and paste data from text boxes in one program into another program for further processing. I'd like to automate this process using VB .NET. The application from which the data is gathered isn't mine, so I don't have ActiveX-like access to its controls. How would you write an application to gain access to a form from another application, to be able to find the controls on the form, and gather the values from them? Just experimenting, I've used the following code. This resulted in only the name of the form to which this code belongs. It didn't find the names of any other forms I have open, and I have a lot open to choose from. This is frustrating because it's only step one of what I'll need to do to make my life easier... Public Declare Function EnumWindows Lib "user32" (ByVal lpEnumFunc As CallBack, ByVal lParam As Integer) As Integer Public Delegate Function CallBack(ByVal hwnd As IntPtr, ByVal lParam As IntPtr) As Boolean Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click Dim cb As New CallBack(AddressOf MyCallBack) EnumWindows(cb, 8) End Sub Public Function MyCallBack(ByVal hwnd As Long, ByVal lparam As Long) As Boolean Dim frm As System.Windows.Forms.Control frm = System.Windows.Forms.Form.FromHandle(hwnd) If frm Is Nothing Then Return True If frm.Text <> "" Then TextBox1.Text += frm.Text & ", " End If Return True End Function Does anyone have a recommendation? Thanks, SH

    Read the article

  • How can I unobtrusively disable submit buttons with Javascript and Prototype?

    - by Frew
    So I found this recommendation, but I can't quite figure out how to do it. This is the code I originally started with: function greySubmits(e) { var value = e.srcElement.defaultValue; // This doesn't work, but it needs to $(e).insert('<input type="hidden" name="commit" value="' + value + '" />'); // This causes IE to not submit at all $$("input[type='submit']").each(function(v) {v.disabled = true;}) } // This works fine Event.observe(window, 'load', function() { $$("input[type='submit']").each(function(e) { Event.observe(e, 'click', greySubmits); }); }); Anyway, I am pretty close, but I can't seem to get any further. Thanks for any help at all! Update: Sorry, I guess I wasn't entirely clear. I'd like to disable all of the submit buttons when someone clicks a submit button, but I do need to send along the value of the clicked button so the server knows which one it was, hence the insert call. (Note: insert does not create a child of the element you call it on.) And then, after disabling the submit buttons, I need to call submit on the form containing them, as IE will not submit after you disable the button. Does that make sense?

    Read the article

  • Infrastructure for a high-transaction system (language & hosting suggestions)

    - by RPS
    Some of our friends (university students) are trying to develop a Twitter-type application. I want to plan for at least 1000 transactions per second (I know it's wishful thinking) for the initial launch. This involves several people connecting, getting updates, and posting (text + images) to the site. In the back end, the DB will serve the data and also calculate rankings of what to push to each user, based on a complex algorithm, on the fly in real time. Our group is familiar with Java and Tomcat/MySQL. We can also easily learn/code in PHP/MySQL. What is the best-suited platform for our purpose? Though Java seems easy for us to implement, I am afraid that hosting will be a bit difficult. I could find cloud-based PHP hosting services (like Rackspace Cloud Sites) at reasonable cost; Amazon EC2 is a bit over our heads to manage day to day. Also, any recommendation on hosting (PHP or Java)? We don't have millions in seed money, but about $20K to start with. Any advice on the above, or on the general approach, is much appreciated.

    Read the article

  • ASP.NET AJAX, jQuery and AJAX Control Toolkit – the roadmap

    - by Harish Ranganathan
    The opinions mentioned herein are solely mine and do not reflect those of my employer. I have wanted to post this for a long time but couldn't. I have been an ASP.NET developer for quite some time and have worked with versions 1.1, 2.0 and 3.5, as well as the latest 4.0. With ASP.NET 2.0 and Visual Studio 2005 came the era of AJAX and rich-UI web applications, and ASP.NET AJAX (codenamed "ATLAS") was released almost a year later. It was called ASP.NET 2.0 AJAX Extensions, and the release was further supported with Visual Studio 2005 Service Pack 1. The initial release of ASP.NET AJAX had 3 components: the ASP.NET AJAX Library, a client library that is used internally by the server controls as well as scripts that can be used to write hand-coded AJAX-style pages; the ASP.NET AJAX Extensions, server controls (ScriptManager, Proxy, UpdatePanel, UpdateProgress and Timer) that work pretty much like other server controls in terms of development and render client-side behavior automatically; and the AJAX Control Toolkit, a set of server controls that extend a behavior or capability (e.g. AutoCompleteExtender). The AJAX Control Toolkit was a separate download from CodePlex, while the first two get installed when you install ASP.NET AJAX Extensions. With Visual Studio 2008, ASP.NET AJAX made its way into the runtime, so one doesn't need to install the AJAX Extensions separately. However, the AJAX Control Toolkit still remained a community project that can be downloaded from CodePlex; by then, the toolkit had close to 30 controls. So the approach was clear: client-side programming using the ASP.NET AJAX Library, and a server-side model using built-in controls (UpdatePanel) and/or the AJAX Control Toolkit. However, with Visual Studio 2008 Service Pack 1, we also added support for the ever more popular jQuery library. That is, you can use jQuery along with ASP.NET and also get IntelliSense for jQuery in Visual Studio 2008. Some of you who have played with Visual Studio 2010 Beta and .NET Framework 4 Beta would also have explored the new AJAX Library, which had a lot of templates, live bindings, etc. But overall, the roadmap ahead is much simplified. For client-side programming using JavaScript to implement AJAX in ASP.NET, the recommendation is to use jQuery, which will be shipped along with Visual Studio and provides IntelliSense as well. For server-side programming, you can use the server controls like UpdatePanel, and also the AJAX Control Toolkit, which has close to 40 controls now. The AJAX Control Toolkit still remains a separate download at CodePlex; you can download the versions for the different versions of ASP.NET at http://ajaxcontroltoolkit.codeplex.com/ The Microsoft AJAX Library will still be available through the CDN (Content Delivery Network) channels; you can view the CDN resources at http://www.asp.net/ajaxlibrary/CDN.ashx Similarly, jQuery and the toolkit are available as CDN resources in case you choose not to download them and have them as part of your application. I think this makes AJAX development pretty simple. Earlier, having both the Microsoft AJAX Library and jQuery for client-side scripting made it confusing to decide which one to use. With this roadmap, it is simple and clear. You can read more on this at http://ajax.asp.net I hope this post provided some clarity on the AJAX roadmap as I could decipher it from the various product teams. Cheers!!!

    Read the article

  • Book Review: Middleware Management with Oracle Enterprise Manager Grid Control 10g R5

    - by olaf.heimburger
    When you are familiar with the Oracle Database and Middleware stack, chances are that you have come across Enterprise Manager. It comes in many versions for the database or the middleware and differs in its features. If you meet someone who talks about Enterprise Manager, it is quite possible that this person is talking about something completely different - Enterprise Manager Grid Control. Enterprise Manager Grid Control is the Oracle product for the data center that monitors all databases, middleware components and operating systems. Since the database part is taken for granted, it needs some additional steps to get into the world of centralized middleware management. That's what this book is for - bringing you into the world of middleware management. The Authors This book is written by Debu Panda, former Product Management Director of the Oracle Fusion Middleware Management development team, and Arvind Maheshwari, Senior Software Development Manager of the Oracle Enterprise Manager development team. The Book Oracle Enterprise Manager conceptually works for many different management areas. As a user you often think of managing databases with it. That is a wide area and deserves another book. The least-known area is middleware management, and that's what the book aims for. The first 3 chapters cover the key features of Enterprise Manager Grid Control, Installing Enterprise Manager Grid Control, and Enterprise Manager Key Concepts and Subsystems. They are the foundation you need to understand the whole product and the following chapters. Read them in order and you are well prepared for the next 10 chapters on managing the various bits and pieces in your data center. The list of bits and pieces is always a surprise, no matter how often you open the book. You can manage Oracle WebLogic Server, Oracle Application Server, Oracle Forms and Reports Services, SOA Suite 10g, Oracle Service Bus 10g, Oracle Internet Directory, Oracle Virtual Directory, Oracle Access Manager, Oracle Identity Manager, Oracle Identity Federation, Oracle Coherence Cluster, non-Oracle middleware like Apache, Tomcat, JBoss, IBM WebSphere and much, much more. The chapters for these components can be read in any order you like; you only need the foundation chapters, then continue with the parts relevant to your data center. Once you are done with them, don't forget to read the last chapter, Best Practices for Managing Middleware Components using Enterprise Manager. Read it, understand it, and implement it in your organization. This will save you valuable time and budget. Recommendation This book is mainly written for Enterprise Manager newbies and saves you a lot of time compared with going through the standard product documentation. All chapters are reasonably short and tell you exactly what you need to know to get started. Nothing more and nothing less. That's the beauty of it and why I love it. Due to its brevity it will not cover everything you'd like to know, but it gets you started and interested in more insights. But that is the job of the product documentation. The Details Title Middleware Management with Oracle Enterprise Manager Grid Control 10g R5 Authors Debu Panda and Arvind Maheshwari Paperback 310 pages ISBN 13 978-1-847198-34-1

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and Configurable Objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a Java program to do this with SQL and then use an application service, but it is now possible with configuration alone. The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL or use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI as criteria (remember explorer zones can be used in the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters in total). Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of the structure. The schema pattern looks like this: Zone element - maps to the ZONE_CD element and the default value is the zone name you just created. This links the business service to the zone. Filter elements - You name the filters as you like, but the mapField is set to Fx_VALUE where x is the filter number corresponding to the filter element in the zone definition. Hidden filter elements - You name the filters as you like, but the mapField is set to Hx_VALUE where x is the filter number corresponding to the hidden filter element in the zone definition. results group - this holds the elements of the result set. Each element in your result set has a tagname and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set of the zone. An example schema is shown below for the F1-USGRACML zone, which returns the access modes for user group and application service filters. In the example, the userGroup and applicationService elements are the filters and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is that you copy the schema from one of the examples rather than typing it from scratch. You can simply modify the tags and other elements to suit your needs. Once the Business Service is defined it can simply be defined as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer zone. This is one of my favorite components as it allows interfaces to be simplified. This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating the exciting developments we are working on at the moment as they are released.

    Read the article

  • Creating typed WSDLs for generic WCF services of the ESB Toolkit

    - by charlie.mott
    source: http://geekswithblogs.net/charliemott Question How do you make it easy for client systems to consume the generic WCF services exposed by the ESB Toolkit using messages that conform to agreed schemas\contracts?  Usually the developer of a system consuming a web service adds a service reference using a WSDL. However, the WSDLs for the generic services exposed by the ESB Toolkit do not make it easy to develop clients that conform to agreed schemas\contracts. Recommendation Take a copy of the generic WSDLs and modify them to use the proper contracts. This is very easy.  It will work with the generic on-ramps so long as the <part>?</part> wrapping is removed from the WCF adapter configuration in the BizTalk receive locations.  Attempting to create a WSDL where the input and output messages are sent/returned with a <part> wrapper is a nightmare.  I have not managed it.  Consequences I can only see the following consequences of removing the <part> wrapper: ESB Test Client – I needed to modify the out-of-the-box ESB Test Client source code to make it send non-wrapped messages.  Flat file formatted messages – the endpoint will no longer support flat file message formats.  However, even if you needed to support this integration pattern through WCF, you would most likely want to create a separate receive location anyway, with its own independently configured XML disassembler pipeline component. Instructions These steps show how to implement a request-response version of this. WCF Receive Locations In BizTalk, for the WCF receive location for the ESB on-ramp, set the adapter Message settings\bindings to "UseBodyPath": Inbound BizTalk message body  = Body Outbound WCF message body = Body Create a WSDL for each supported integration use case Save a copy of the WSDL for the WCF generic receive location above that you intend the client system to use. Give it a name that mirrors the interface agreement (e.g. Esb_SuppliersSearchCommand_wsHttpBinding.wsdl).   Add any XSD schema files imported below to this same folder.   
Edit the WSDL to import schemas For example, this: <xsd:schema targetNamespace=http://microsoft.practices.esb/Imports /> … would become something like: <xsd:schema targetNamespace="http://microsoft.practices.esb/Imports">     <xsd:import schemaLocation="SupplierSearchCommand_V1.xsd"                            namespace="http://schemas.acme.co.uk/suppliersearchcommand/1.0"/>     <xsd:import  schemaLocation="SuppliersDocument_V1.xsd"                              namespace="http://schemas.acme.co.uk/suppliersdocument/1.0"/>     <xsd:import schemaLocation="Types\Supplier_V1.xsd"                              namespace="http://schemas.acme.co.uk/types/supplier/1.0"/>     <xsd:import  schemaLocation="GovTalk\bs7666-v2-0.xsd"                               namespace="http://www.govtalk.gov.uk/people/bs7666"/>     <xsd:import  schemaLocation="GovTalk\CommonSimpleTypes-v1-3.xsd"                             namespace="http://www.govtalk.gov.uk/core"/>     <xsd:import  schemaLocation="GovTalk\AddressTypes-v2-0.xsd"                              namespace="http://www.govtalk.gov.uk/people/AddressAndPersonalDetails"/> </xsd:schema> Modify the Input and Output message For example, this: <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">   <wsdl:part name="part" type="xsd:anyType"/> </wsdl:message> <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">   <wsdl:part name="part" type="xsd:anyType"/> </wsdl:message> … would become something like: <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_InputMessage">   <wsdl:part name="part"                       element="ssc:SupplierSearchEvent"                         xmlns:ssc="http://schemas.acme.co.uk/suppliersearchcommand/1.0" /> </wsdl:message> <wsdl:message name="ProcessRequestResponse_SubmitRequestResponse_OutputMessage">   <wsdl:part name="part"                       element="sd:SuppliersDocument"                       xmlns:sd="http://schemas.acme.co.uk/suppliersdocument/1.0"/> </wsdl:message> This WSDL can now be added as a service reference in client solutions.

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it. A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself; however, a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio and is a desirable state for a build server. In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT: This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE. http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/ Frankly, however, going through these steps is a royal PITA, and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday in MSDN forum thread Building a VS2013 headless build server - it's sooo hard Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane from the SSDT product team: The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine. I, like many others, would rather not have to install full-blown Visual Studio and so I asked: Is there any chance you'll ever support any of these scenarios: Installation of all build/deploy pre-requisites without installing the VS shell? TFS shipping with all of the pre-requisites for doing SSDT project build/deploys 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again: The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers. 
    I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers we've no problem with that. Again, here's the link to the thread and it's worth reading in its entirety if this is something that interests you. So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead. @Jamiet

    Read the article

  • A Letter for Your CEO About Social Marketing’s Future

    - by Mike Stiles
    We’ll leave it to you to decide if or how to sneak this in front of them. Dear Chief: This social marketing thing looks serious. It’s gone beyond having a Facebook page and putting our info and a few promotions on it. It’s seriously disrupting how we’ve always done marketing. And its implications reach well beyond marketing. My concern is that we stay positioned ahead of these changes and are prepared to embrace, adapt and capitalize on these new capabilities as opposed to spending valuable time and money trying to shoehorn social into “the way we’ve always done things.” I’m also concerned about what happens if our competition executes on this before we do. The days of being able to impose our ad messaging on the masses to great effect are numbered. The public now has the tech tools and ability to filter out things that are irrelevant to them. And frankly, spending ad dollars to reach unlikely prospects isn’t the most efficient path for us either. Today, our customers have to genuinely love what we do. That starts with a renewed, customer-centric focus on the quality and usability of our product. If their experience with it is bad, they now have very connected, loud voices that will testify against us. We can’t afford that. Next, their customer service experience, before and after the sale, has to be a pleasant surprise. That requires truly knowing our customers and listening to them. Lip service won’t cut it. We have to get and use as much data on the customer as possible, interact with them wherever they want to interact with us, and commit to impressing them. If we do, they’ll get out there and advertise for us. Since peer-to-peer recommendation is the most effective marketing, that’s money in the bank. Social marketing is about forming relationships, same as how individuals use social. We want them to know us, trust us, and get real value from knowing us. That requires honesty and transparency that before now might have been uncomfortable. I propose that if we clearly make everything we do about our customers’ wants and needs, we’ll have nothing to hide. It will solidify customer loyalty, retention, and thus, revenue. These things can’t happen without certain tools and structural changes in the organization. There are social cloud platforms that integrate social management into all of the necessary areas: CRM, customer service, sales, marketing automation, content marketing, ecommerce, etc. This is will give us a real-time, complete view of the customer so their every interaction with us is attentive, personalized, accurate, relevant, and satisfying. Without it, we’re just a collage of disjointed systems, each gathering data that informs only its own departmental silo. The customer is voluntarily giving us everything we need to know about them to win them over, but we have to start listening and putting the pieces together. There’s still time. Brands are coming to terms with this transition to the socially enabled enterprise, but so far they aren’t moving very fast. Like us, they’re dealing with long-entrenched technologies and processes. CMO’s and CIO’s have to form new partnerships. Content operations have to be initiated and properly staffed and funded. Various departments must be able to utilize interconnected big data. What will separate the winners from the losers? Well chief, that’s why I’m writing you. It’s in your hands. These initiatives won’t get the kind of priority and seriousness that inspire actual deadlines & action unless they come from your desk. 
    You have to be the champion of customer centricity. You have to be our change agent. You have to be our innovator. Otherwise, it's going to be business as usual, and that puts us in a very vulnerable place. Sincerely, Your Team @mikestiles Photo: Gary Scott, stock.xchng

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables) but goes on to suggest that for reasonably small-scale strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database; table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers. The problem with temp tables is that, because they are scoped either in the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book, meaning that the web page exits without closing the connection object, maybe due to an error condition. This will then hang around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also means that they are intuitive to use for the developer because they conform to scoping rules that are closer to those in procedural code. On the surface then, an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use table variables? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again the biggest problem is how they are handled internally, by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on, in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt. Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with such standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved. In the meantime, what is the right approach? 
Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server? Cheers, Tony.

    Read the article

  • Editor's Notebook - Social Aura: Insights from the Oracle Social Media Summit

    - by user462779
    [Photo: Panelists talk social marketing at the Oracle Social Media Summit] On November 14, I traveled to Las Vegas for the first-ever Oracle Social Media Summit. The two-day event featured an impressive collection of social media luminaries including: David Kirkpatrick (founder and CEO of Techonomy Media and author of The Facebook Effect), John Yi (Head of Marketing Partnerships, Facebook), Matt Dickman (EVP of Social Business Innovation, Weber Shandwick), and Lyndsay Iorio (Social Media & Communications Manager, NBC Sports Group) among others. It was also a great opportunity to talk shop with some of our new Vitrue and Involver colleagues who had been returning great social media results even before their companies were acquired by Oracle. I was live tweeting the event from @OracleProfit, which was great for those who wanted to follow along with the proceedings from the comfort of their office or blackjack table. But I've also found over the years that live tweeting an event is a handy way to take notes: I can sift back through my record of what people said or thoughts I had at the time and organize the Twitter messages into some kind of summary account of the proceedings. I've had nearly a month to reflect on the presentations and conversations at the event and a few key topics have emerged: David Kirkpatrick's comment during the opening presentation really set the stage for the conversations that followed. Especially if you are a marketer or publisher, the idea that you are in a one-way broadcast relationship with your audience is a thing of the past. "Rising above the noise" does not mean reaching for a megaphone, ALL CAPS, or exclamation marks. Hype will not motivate social media denizens to do anything but unfollow and tune you out. But knowing your audience, creating quality content and/or offers for them, treating them with respect, and making an authentic effort to please them: that's what I believe is now necessary. And Kirkpatrick's comment early in the day really made the point. Later in the day, our friends @Vitrue demonstrated this point by elaborating on a comment by Facebook's John Yi. If a social strategy is comprised of nothing more than cutting/pasting the same message into different social media properties, you're missing the opportunity to have an actual conversation. That's not shouting at your audience, but it does feel like an empty gesture. [Image: Walter Benjamin, perplexed by auraless Twitter messages] Not to get too far afield, but 20th century cultural critic Walter Benjamin has a concept that is useful for understanding the dynamics of the empty social media gesture: Aura. In his work The Work of Art in the Age of Mechanical Reproduction, Benjamin struggled to understand the difference he perceived between the value of a hand-made art object (a painting, wood cutting, sculpture, etc.) and a photograph. For Benjamin, aura is similar to the "soul" of an artwork--the intangible essence that is created when an artist picks up a tool and puts creative energy and effort into a work. I'll defer to Wikipedia: "He argues that the "sphere of authenticity is outside the technical" so that the original artwork is independent of the copy, yet through the act of reproduction something is taken from the original by changing its context. He also introduces the idea of the "aura" of a work and its absence in a reproduction." So make sure you put aura into your social interactions. Don't just mechanically reproduce them. 
Keeping aura in your interactions requires the intervention of an actual human being. That's why @NoahHorton's comment about content curation struck me as incredibly important. Maybe it's just my own prejudice, being in the content curation business myself. And it's not to totally discount machine-aided content management systems, content recommendation engines, and other tech-driven tools for building an exceptional content experience. It's just that without that human interaction--that editor who reviews the analytics and responds to user feedback--interactions over social media feel a bit empty. It is SOCIAL media, right? (We'll leave the conversation about social machines for another day). At the end of the day, experimentation is key. Just like trying to find that right joke to tell at the beginning of your presentation or that good opening like at a cocktail party, social media messages and interactions can take some trial and error. Don't be afraid to try things, tinker with incomplete ideas, abandon things that don't work, and engage in the conversation. And make sure your heart is in it, otherwise your audience can tell. And finally:

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41  | Next Page >