Search Results

Search found 321 results on 13 pages for 'canada'.

Page 12 of 13

  • How to use AJAX to populate state list depending on Country list?

    - by jasondavis
    I have the code below that changes a state dropdown list when you change the country list. How can I make it change the state list ONLY when country ID 223 or 224 is selected? If any other country is selected, it should change into this text input box:

        <input type="text" name="othstate" value="" class="textBox">

    The form:

        <form method="post" name="form1">
            <select style="background-color: #ffffa0" name="country" onchange="getState(this.value)">
                <option>Select Country</option>
                <option value="223">USA</option>
                <option value="224">Canada</option>
                <option value="225">England</option>
                <option value="226">Ireland</option>
            </select>
            <select style="background-color: #ffffa0" name="state">
                <option>Select Country First</option>
            </select>
        </form>

    The JavaScript:

        <script>
        function getState(countryId) {
            var strURL = "findState.php?country=" + countryId;
            var req = getXMLHTTP();
            if (req) {
                req.onreadystatechange = function() {
                    if (req.readyState == 4) {
                        // only if "OK"
                        if (req.status == 200) {
                            document.getElementById('statediv').innerHTML = req.responseText;
                        } else {
                            alert("There was a problem while using XMLHTTP:\n" + req.statusText);
                        }
                    }
                }
                req.open("GET", strURL, true);
                req.send(null);
            }
        }
        </script>
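    One way to get the branch the question asks for is to wrap the AJAX call in the ID check and swap in the text box otherwise. A minimal sketch that reuses the post's own getXMLHTTP() helper (not shown in the excerpt) and assumes the state dropdown is wrapped in a container with id="statediv", the element the original callback already targets:

        function getState(countryId) {
            var container = document.getElementById('statediv');
            // Only USA (223) and Canada (224) have server-side state lists
            if (countryId == '223' || countryId == '224') {
                var req = getXMLHTTP(); // helper from the original post
                if (req) {
                    req.onreadystatechange = function() {
                        if (req.readyState == 4 && req.status == 200) {
                            container.innerHTML = req.responseText;
                        }
                    };
                    req.open("GET", "findState.php?country=" + countryId, true);
                    req.send(null);
                }
            } else {
                // Any other country: replace the dropdown with a free-text input
                container.innerHTML = '<input type="text" name="othstate" value="" class="textBox">';
            }
        }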

    Read the article

  • I want to input a JSONObject from jquery.post() into a simple JS chart (bar chart)?

    - by ann-stack
    Hi, I am very new to both JSON and JS charts. In the bar chart example, the data is a hard-coded array like this:

        var myData = new Array(['U.S.A.', 69.5], ['Canada', 2.8], ['Japan & SE.Asia', 5.6]);
        var myChart = new JSChart('graph', 'bar');
        myChart.setDataArray(myData);

    Instead of that, I want to use the response of the $.post() method, which is JSON. Here is the piece of code:

        var myData = [];
        $.post("JSONServlet", function(data) {
            $.each(data.Userdetails, function(i, data) {
                myData[i] = [];
                myData[i]['text'] = data['firstname'];
                myData[i]['id'] = data['ssn'];
                alert("first name " + myData[i]['text'] + " salary " + myData[i]['id']);
                // I am getting correct data here, but how to assign this myData to the bar chart?
            });
        }, "json");

    Is this the right approach, and how else can I get the username and salary from the response and pass them to the bar chart? Please help, I am stuck with this. Thanks in advance.
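    A minimal sketch of one way to wire this up, assuming the servlet returns the Userdetails structure shown above: build plain [label, value] pairs (the shape the hard-coded example uses) and call setDataArray() inside the $.post callback, since the request is asynchronous and the data does not exist outside it:

        var myChart = new JSChart('graph', 'bar');
        $.post("JSONServlet", function(data) {
            var myData = [];
            $.each(data.Userdetails, function(i, row) {
                // JSChart expects [label, value] pairs, not named properties
                myData.push([row['firstname'], row['ssn']]);
            });
            myChart.setDataArray(myData); // must run here, inside the callback
        }, "json");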

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem

    The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used for inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalised and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways the database structure can be improved to facilitate efficient queries and help me develop this application (and future applications) faster.

    Business logic

    The bulletin board will be used in the following ways.

    Posting bulletins and responses to bulletins: Employees, or 'users', in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised - I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, including their own - I'll call these "replies".

    Rating bulletins and replies: Users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses: Bulletins can be displayed chronologically. Users can sort bulletins chronologically or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of physical model
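    For readers without access to the linked model, a minimal sketch of the normalised shape the requirements above imply (illustrative names only - this is not the data model referenced in the question):

        CREATE TABLE bulletin (
            bulletin_id  INT AUTO_INCREMENT PRIMARY KEY,
            user_id      INT NOT NULL,
            location_id  INT NOT NULL,
            category_id  INT NOT NULL,
            body         TEXT NOT NULL,
            created_at   DATETIME NOT NULL
        );

        CREATE TABLE reply (
            reply_id     INT AUTO_INCREMENT PRIMARY KEY,
            bulletin_id  INT NOT NULL,   -- FK to bulletin
            user_id      INT NOT NULL,
            body         TEXT NOT NULL,
            created_at   DATETIME NOT NULL
        );

        -- One vote per user per bulletin; like/dislike totals are SUMs over vote.
        -- A parallel reply_rating table handles votes on replies.
        CREATE TABLE bulletin_rating (
            user_id      INT NOT NULL,
            bulletin_id  INT NOT NULL,
            vote         TINYINT NOT NULL,  -- +1 = like, -1 = dislike
            PRIMARY KEY (user_id, bulletin_id)
        );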

    Read the article

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit into the countries a certain user chose. The question is what would be the most efficient approach to this problem. One option is to save the user's selection as a delimited list, like Canada,USA,France..., in a single varchar(max) field, but the problem with that shows up when, say, a user from Germany enters a page I perform this check on: to search for Germany I would need to fetch every row and split each field to check against the value, or use SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips, I would be glad to hear them. Just to be clear: many users will each have their own selection of countries from which (and only which) they want users to land on their page, while millions of users will reach those pages. So the faster approach, the better. Technology: MSSQL and ASP.NET. Thanks
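    The usual alternative to a delimited varchar is a junction table, which turns the Germany check into an indexed lookup instead of a LIKE scan. A minimal T-SQL sketch (table and column names are illustrative, not from the post):

        CREATE TABLE UserCountry (
            UserId    INT NOT NULL,
            CountryId INT NOT NULL,
            PRIMARY KEY (UserId, CountryId)
        );

        -- "Which page owners accept visitors from Germany?" becomes a seek on the PK:
        SELECT UserId
        FROM   UserCountry
        WHERE  CountryId = @GermanyId;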

    Read the article

  • Why does Rails screw up timezones when I am editing a resource?

    - by DJTripleThreat
    Steps to reproduce this:

        prompt> rails test_app
        prompt> cd test_app
        prompt> script/generate scaffold date_test my_date:datetime
        prompt> rake db:migrate

    Now edit your app/views/date_tests/edit.html.erb:

        <h1>Editing date_test</h1>
        <% form_for(@date_test) do |f| %>
          <%= f.error_messages %>
          <p>
            RIGHT!<br/>
            <%= text_field_tag @date_test, f.object.my_date %>
          </p>
          <p>
            WRONG!<br />
            <%= f.text_field :my_date %>
          </p>
          <p>
            <%= f.submit 'Update' %>
          </p>
        <% end %>
        <%= link_to 'Show', @date_test %> |
        <%= link_to 'Back', date_tests_path %>

    Now edit your config/environment.rb:

        # add this
        config.time_zone = 'Central Time (US & Canada)'

    This recreates the problem I am having in my actual app. The problem with my app is that I'm storing a date in a hidden field and rendering a "user friendly" version. Creating a resource works fine, but as soon as I try to edit it, the time changes (it adds the difference between my current time zone configuration and UTC). Go to http://localhost:3000/date_tests/new and save the time, then go to re-edit it and you will have two different representations of the date/time, one of which will save incorrectly and the other of which will save correctly.
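    The asymmetry comes from Rails' time-zone-aware attribute readers: with config.time_zone set, the column is stored in UTC but the reader converts on the way out, so the two helpers can render different strings. A quick console sketch to watch the two representations diverge (standard ActiveRecord methods, assuming the scaffold above):

        dt = DateTest.first
        dt.my_date                   # zone-aware reader: converted to Central Time
        dt.my_date_before_type_cast  # raw stored value: UTC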

    Read the article

  • Cannot connect to Github?

    - by user2973438
    So I tried to push some updates to my repo on GitHub via Terminal on Mac OS X 10.8.4 and it doesn't work. I've been getting the same error many times:

        Lillys-MacBook-Air:Yuewei Lilly$ git push origin master
        error: Failed connect to github.com:443; Operation timed out while accessing https://github.com/lillybeans/Yuewei.git/info/refs?service=git-receive-pack
        fatal: HTTP request failed

    Some background: I've pushed many projects to GitHub before using Terminal (when I was in Canada). I am currently in Shanghai, China - could it be the GFW? But when I was in Beijing, I was still able to push projects to GitHub.

    When I do ping github.com:

        Lillys-MacBook-Air:Yuewei Lilly$ ping github.com
        PING github.com (192.30.252.131): 56 data bytes
        Request timeout for icmp_seq 0
        Request timeout for icmp_seq 1
        ping: sendto: No route to host
        Request timeout for icmp_seq 2
        ping: sendto: Host is down
        Request timeout for icmp_seq 3
        ping: sendto: Host is down
        Request timeout for icmp_seq 4
        ping: sendto: Host is down
        Request timeout for icmp_seq 5
        ping: sendto: Host is down
        Request timeout for icmp_seq 6
        ping: sendto: Host is down
        Request timeout for icmp_seq 7
        ^C
        --- github.com ping statistics ---
        9 packets transmitted, 0 packets received, 100.0% packet loss
        Lillys-MacBook-Air:Yuewei Lilly$

    I have ShadowSocks (proxy) turned on. Without it I can't access github.com via browser; with it, I can. Also, when I do "git remote -v" I see both my pull and push remote repos correctly listed. Thank you in advance!
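    Since the browser only reaches github.com through ShadowSocks, one common fix is to point git at the same local proxy. A sketch, assuming the ShadowSocks client listens on the usual local SOCKS port 1080 (check your client's settings - the port is an assumption):

        # Route git's HTTPS traffic through the local SOCKS proxy
        git config --global http.proxy socks5://127.0.0.1:1080

        # Undo it when back on an unfiltered network
        git config --global --unset http.proxy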

    Read the article

  • Getting a visitor's country from their IP

    - by Ali Abdulkarim Salem
    I want to get a visitor's country via their IP. Right now I'm using this (http://api.hostip.info/country.php?ip=......). Here's my code:

        <?php
        // Check the headers in order; the elseif chain keeps an earlier
        // match from being overwritten by the REMOTE_ADDR fallback
        if (isset($_SERVER['HTTP_CLIENT_IP'])) {
            $real_ip_adress = $_SERVER['HTTP_CLIENT_IP'];
        } elseif (isset($_SERVER['HTTP_X_FORWARDED_FOR'])) {
            $real_ip_adress = $_SERVER['HTTP_X_FORWARDED_FOR'];
        } else {
            $real_ip_adress = $_SERVER['REMOTE_ADDR'];
        }

        $cip = $real_ip_adress;
        $iptolocation = 'http://api.hostip.info/country.php?ip=' . $cip;
        $creatorlocation = file_get_contents($iptolocation);
        ?>

    Well, it's working properly, but the thing is, this returns the country code, like US or CA, and not the whole country name, like United States or Canada. So, is there any good alternative to hostip.info that offers this? I know that I could just write some code that eventually turns these two letters into the whole country name, but I'm just too lazy to write code that contains all the countries... P.S.: For some reason I don't want to use any ready-made CSV file or any code that will grab this information for me, something like ip2country ready-made code and CSV.
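    If the intl extension is available, PHP can already translate a two-letter ISO code into an English country name, so no hand-written country list is needed. A minimal sketch (assumes ext/intl is installed):

        <?php
        // The leading "-" makes the code parse as the region part of a locale tag
        $code = trim($creatorlocation);                   // e.g. "CA" from hostip.info
        $name = Locale::getDisplayRegion('-' . $code, 'en');
        echo $name;                                       // "Canada"
        ?>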

    Read the article

  • Database (MySQL) structuring: pros and cons of multiple tables

    - by Gideon
    I am collecting data and storing it in MySQL, for: 75 variables, 55 countries, each year. At this stage, since I am still building this tool, I have created a single table of variables/countries, storing one year's worth of data. Next year (and for several years after that) a new set of data will be input for each country. There are therefore three variables controlling the data returned to a user reviewing all collected data. The general form of any query would be: show me these specific variables, for these specific countries, for these specific years. (Show me average age and weight, for USA and Canada, for 2012 and 2009, for example.) My question is, it seems that I have two options for arranging this data:

    - Multiple tables, where I create a table of country/variable for each year data is collected
    - A single table, where I simply add a column (field) for the year the data relates to

    As far as I can tell I could make these database calls with either structure, but is one more powerful/efficient/quicker, and why? Thanks for your consideration. It's a PDO/PHP interface, if that is relevant.
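    With the single-table-plus-year-column layout, the "general form" query above stays one statement no matter how many years accumulate, which is the usual argument for it. An illustrative sketch (the schema and column names are assumptions, not the poster's):

        SELECT country, year, variable, value
        FROM   measurements
        WHERE  variable IN ('average_age', 'average_weight')
          AND  country  IN ('USA', 'Canada')
          AND  year     IN (2009, 2012)
        ORDER BY country, year;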

    Read the article

  • Pros/Cons of switching from Exchange to GMail

    - by Brent
    We are a medium-large non-profit company, with around 1,000 staff and volunteers, and have been using MS Exchange (currently 2003) for our mail system for years. I recently attended a Google conference where they were positing that "cloud computing is the way of the future" and encouraging us to switch from running our own email on Exchange to using GMail and Google Apps for everything. Additionally, one of our departments has been pushing from the inside to make this transition within their own department, if not throughout the entire organization. I can definitely see some benefits, such as:

    - Archive space: we never seem to have the space our users want, and of course, the more we get, the more we have to back up.
    - OS agnostic: Exchange is definitely built for Windows, and with Mac and Linux users on the rise, these users increasingly demand better tools and support. Google offers this.
    - Better archiving: the potential of e-discovery, which doesn't exist in a practical way with our current setup.

    Switching would relieve us of a fair bit of server administration, give more options to our end users, and free up the server resources we are now using for Exchange. Our IT department wants to be perceived as providing up-to-date solutions to technical problems, and this change would definitely provide such an image. Google's infrastructure is obviously much more robust than ours, and they employ some of the world's best security and network experts. However, there are also some serious drawbacks:

    - We would essentially be outsourcing one of our mission-critical systems to a third party.
    - The switch would inevitably involve Google Apps and perhaps more as well. That means we would have a lot more at the mercy of a single (potentially weak) password. (Is there a way to make this more secure using a password plus a physical key of some sort?)
    - Our data would not remain under our roof, or even in our country (Canada). This obviously has pluses on the disaster recovery side, but I think there are potential negatives on the legal side.
    - I can't imagine that somebody as large as Google would be as responsive as we would want with regard to non-critical issues such as tracing missing emails, etc. (I'm not sure how much access we would have to basic mail logs, for instance.)

    Can anyone help me evaluate this decision? What issues am I overlooking? What experiences have you had with this transition (or the opposite: GMail to Exchange)? Can you add to the points I have already outlined?

    Read the article

  • Computer suddenly dies; screen displays weird flickering lines, then restarts

    - by Imray
    I've been having this terrible problem for a little while and just managed to get a picture of the 'dead screen' for the first time, and I am posting it to seek help. Randomly, at irregular intervals (typically once a week), while working on something (it's been different things every time), my computer will just suddenly go dead - the screen turns to exactly the picture below (the lines flicker a little bit), it hangs there for a few seconds, and then restarts. Obviously this is extremely frustrating and I want to try to stop it. I've searched numerous postings with similar keywords but nothing exactly the same as mine. Does anyone have any idea what might be the cause of this? I would post all my system settings and installed programs, but the list is long and I don't know how relevant each item would be. If you'd like to know something specific, please comment and I'll let you know whatever you need.

    SPECS

        C:\Users\Imray>systeminfo
        Host Name:                 Imray
        OS Name:                   Microsoft Windows 7 Professional
        OS Version:                6.1.7600 N/A Build 7600
        OS Manufacturer:           Microsoft Corporation
        OS Configuration:          Standalone Workstation
        OS Build Type:             Multiprocessor Free
        Registered Owner:          Imray - Owner
        Registered Organization:
        Product ID:                00371-152-9333854-85895
        Original Install Date:     06/09/1999, 5:45:21 PM
        System Boot Time:          22/03/2013, 8:58:18 AM
        System Manufacturer:       Gateway
        System Model:              DX4840
        System Type:               x64-based PC
        Processor(s):              1 Processor(s) Installed.
                                   [01]: Intel64 Family 6 Model 37 Stepping 2 GenuineIntel ~3201 Mhz
        BIOS Version:              American Megatrends Inc. P01-A3, 17/05/2010
        Windows Directory:         C:\Windows
        System Directory:          C:\Windows\system32
        Boot Device:               \Device\HarddiskVolume2
        System Locale:             en-us;English (United States)
        Input Locale:              en-us;English (United States)
        Time Zone:                 (UTC-05:00) Eastern Time (US & Canada)
        Total Physical Memory:     6,135 MB
        Available Physical Memory: 3,632 MB
        Virtual Memory: Max Size:  12,268 MB
        Virtual Memory: Available: 8,114 MB
        Virtual Memory: In Use:    4,154 MB
        Page File Location(s):     C:\pagefile.sys
        Domain:                    WORKGROUP
        Logon Server:              \\Imray-OWNER
        Hotfix(s):                 4 Hotfix(s) Installed.
                                   [01]: KB971033
                                   [02]: KB958559
                                   [03]: KB977206
                                   [04]: KB981889
        Network Card(s):           2 NIC(s) Installed.
                                   [01]: 802.11n Wireless PCI Express Card LAN Adapter
                                         Connection Name: Wireless Network Connection
                                         DHCP Enabled:    Yes
                                         DHCP Server:     192.168.2.1
                                         IP address(es)
                                         [01]: 192.168.2.13
                                         [02]: fe80::1df1:5399:6890:91f6
                                   [02]: Microsoft Virtual WiFi Miniport Adapter
                                         Connection Name: Wireless Network Connection 2
                                         DHCP Enabled:    Yes
                                         DHCP Server:     N/A
                                         IP address(es)

    Graphics Card Specs

        Name                  ATI Radeon HD 5570
        PNP Device ID         PCI\VEN_1002&DEV_68D9&SUBSYS_E142174B&REV_00\4&18A4B35E&0&0008
        Adapter Type          ATI display adapter (0x68D9), ATI Technologies Inc. compatible
        Adapter Description   ATI Radeon HD 5570
        Adapter RAM           1.00 GB (1,073,741,824 bytes)
        Installed Drivers     atiu9p64 aticfx64 aticfx64 atiu9pag aticfx32 aticfx32 atiumd64 atidxx64 atidxx64 atiumdag atidxx32 atidxx32 atiumdva atiumd6a atitmm64
        Driver Version        8.700.0.0
        INF File              oem1.inf (ati2mtag_Evergreen section)
        Color Planes          Not Available
        Color Table Entries   4294967296
        Resolution            1920 x 1080 x 59 hertz
        Bits/Pixel            32
        Memory Address        0xD0000000-0xDFFFFFFF
        Memory Address        0xFBDE0000-0xFBDFFFFF
        I/O Port              0x0000D000-0x0000DFFF
        IRQ Channel           IRQ 4294967293
        I/O Port              0x000003B0-0x000003BB
        I/O Port              0x000003C0-0x000003DF
        Memory Address        0xA0000-0xBFFFF
        Driver                c:\windows\system32\drivers\atikmpag.sys (8.14.1.6095, 181.00 KB (185,344 bytes), 06/09/1999 5:59 PM)

    Read the article

  • Upcoming EMEA, APAC & US Events with MySQL in 2014

    - by Lenka Kasparova
    As an update to the previous announcement from March 25, 2014, please find below the updated list of events the MySQL Community team is attending and/or supporting. This time you can find not only the EMEA & APAC ones but also the conferences & events we are covering in the US & Canada. You are invited to meet our engineers at the events below.

    EMEA

    - NEW!! BGOUG, Sandanski, Bulgaria, June 13, 2014: Georgi Kodinov will attend and speak at this local Oracle User Group event. Feel free to come.
    - PHP Tour Lyon, Lyon, France, June 23-24, 2014: The MySQL team is going to be part of this show as well. We will not have a booth here, but there will be very active networking by our French MySQL team around the event. Come to meet us and talk to us!
    - NEW!! Converge Conference, Glasgow, Scotland, August 15-16, 2014: MySQL Community Manager David Stokes attends with a MySQL talk.
    - NEW!! CakeFest, Madrid, Spain, August 21-24, 2014: A talk on "Scaling Your MySQL Instances AND Keeping Your Sanity" will be given by MySQL Community Manager David Stokes.
    - Froscon 2014, St. Augustin, Germany, August 23-24, 2014: Please visit our booth, and watch the Froscon website for schedule updates.
    - NEW!! SymfonyLive, London, UK, September 25-26, 2014: MySQL Community Managers David Stokes & Morgan Tocker submitted MySQL talks for this show. The schedule will be announced later on.
    - DrupalCon Amsterdam, The Netherlands, September 29-October 3, 2014: Meet us at our booth at DrupalCon Amsterdam. For the schedule, please watch the DrupalCon website.
    - All Your Base, Oxford, UK, October 17, 2014: Come visit our MySQL booth and talk to our MySQL experts.
    - NEW!! WebTechCon / IPC, Munich, Germany, October 26-29, 2014
    - NEW!! DOAG, Nuremberg, Germany, November 18-20, 2014: There will be a full day of MySQL talks and one full day of MySQL workshops & sessions with live demos. This event is simply hard to miss!
    - NEW!! Forum PHP Paris, France, November 21-22, 2014: More details TBD.
    - NEW!! UK OUG, Liverpool, UK, December 8-10, 2014: MySQL will be part of the Oracle booth, and we hope to get more space for MySQL talks.

    USA

    - NEW!! Texas Linux Fest, Austin, Texas, US, June 13-14, 2014
    - NEW!! SouthEast Linux Fest, Charlotte, US, June 20-22, 2014
    - NEW!! Debian Conference 2014, Portland, OR, US, August 23-31, 2014
    - NEW!! FossetCon, Orlando, US, September 11-13, 2014
    - NEW!! Oracle Open World, San Francisco, US, September 29-October 3, 2014
    - NEW!! MySQL Central @ OpenWorld, San Francisco, US, September 29-October 3, 2014
    - NEW!! PyTexas 2014, Dallas, TX, US, October 3-5, 2014
    - NEW!! All Things Open (replacing POSSCON), Raleigh, NC, October 23-24, 2014
    - NEW!! Ohio LinuxFest 2014, Columbus, Ohio, US, October 24-25, 2014
    - NEW!! ZendCon PHP, Santa Clara, US, October 27-30, 2014
    - NEW!! Kuali Days 2014, Indianapolis, US, November 10-13, 2014
    - NEW!! Live 360, Orlando, FL, US, November 17-20, 2014

    APAC

    - OpenSourceConference Japan, Hokkaido, June 13-14, 2014: MySQL is represented by Ryusuke Kajiyama with a talk on "MySQL Technology Updates".
    - NEW!! db tech showcase, Osaka, Japan, June 18-20, 2014: Three MySQL talks are scheduled for this show: "MySQL for Oracle DBA" and "MySQL Technology Updates" by Ryusuke Kajiyama, plus a talk on MySQL Fabric by Yoshiaki Yamasaki.
    - NEW!! PyCon Singapore, Singapore, June 18-20, 2014: Ryusuke Kajiyama will be talking about "Sharding and scale-out using Python-based MySQL Fabric".
    - NEW!! COSCUP, Taipei, Taiwan, July 19-20, 2014: We are going to run a technical session on MySQL Workbench and one talk on how to make MySQL a better MySQL.
    - NEW!! PyCon New Zealand, Wellington, New Zealand, September 13-14, 2014: MySQL talks were submitted, as well as one talk by the Solaris Modernization team on Python & Solaris. Watch the website for schedule updates.
    - NEW!! PyCon Japan, Tokyo, Japan, September 13-15, 2014: MySQL will have a session speaker; no schedule is announced yet.
    - Ruby Kaigi, Tokyo, Japan, September 18-20, 2014: Another event MySQL supports and attends in the APAC region. Ruby Kaigi is the international Ruby conference in Tokyo, Japan. Ruby started in Japan, so Ruby Kaigi has excellent speakers and developers! The MySQL team is going to be present at this conference with MySQL talks and active networking around the venue.
    - NEW!! PyCon India, Bangalore, India, September 26-28, 2014: A MySQL talk on "MySQL Utilities scaling MySQL with Python" has been submitted; please watch the PyCon website for schedule updates.
    - NEW!! OpenSourceConference Japan, Tokyo, October 18-19, 2014
    - NEW!! OpenSource India, Bengaluru, India, November 7-8, 2014
    - NEW!! OpenSourceConference Japan, Fukuoka, November 14-15, 2014

    You can check the MySQL wikis for updates on the conferences we are attending. Next time I hope to have more details for each event above (especially for the US ones).

    Read the article

  • SQLAuthority News – Technical Review of Learning at Koenig Solutions

    - by pinaldave
    Yesterday I finished my three-day fast-track, in-person course, End to End SQL Server Business Intelligence, at Koenig Solutions. You can read my previous article over here regarding why I am learning SQL Server. Yesterday I blogged about my experience of arriving at the training center and my induction with the center.

    The Training Days

    I had enrolled for three days of training, so my routine on each of the three days was very much the same. However, the content was different every day, as I was learning something new each day. Let me describe a few of the interesting details of my daily routine.

    A Single-Student Batch

    The best part of my training was that my training batch had a single student: me. Koenig is known for smaller batches and often has single-student batches as well. I was very much delighted to know that I would have dedicated access and attention from my trainer, as I would be the only student in my batch. In most of the labs I observed, there were no more than four students at any time.

    [Photo: Prakash and Pinal]

    7:30 AM - Breakfast Talk

    All of us students gathered at 7:30 in the breakfast area - the best time of the day. I was the only Indian student in the group. The other students were from the USA, Canada, Nigeria, Bhutan, Tanzania, and a few other countries. I immediately became the source of information and the reference manual. Though the distance between Delhi and Bangalore is 2,000+ km, I was considered a local guy.

    8:30 AM - Heading to the Training Center

    Every day without fail, at 8:30 the van started from our accommodation to the training center. As mentioned in an earlier blog post, the distance is about 5 minutes, and we were able to reach the location before 8:45. This gave us some time to settle in before our class started at 9:00 AM.

    9:00 AM - Order Lunch

    Well, it may sound funny that we had just had breakfast 30 minutes earlier, but the first thing everybody has to do is order lunch as soon as class starts. There is an online training portal to order food for the day. Everybody has to place their order early in the day so the food arrives on time at lunchtime. Everybody can order whatever they want using the online ordering system. The options are plenty, and everybody can order what they like.

    9:05 AM - Learning Starts

    After deciding on lunch, we started the learning. I was very fortunate to have a very experienced trainer - Prakash Chheatry. Though I had never met him before, I have heard a lot about Prakash. He is known as the top SQL Server trainer in India. His student list contains some of the very well-known SQL Server experts of the world and a few SQL Server "best seller" book authors. Learning continues till 1:00 PM, with one tea/coffee break in between.

    1:00 PM - Lunch

    Lunchtime is again fun time. All of us students got together in the afternoon and told the stories of the world. Indeed the best part of the day besides learning new stuff.

    4:55 PM - Ready to Return

    We stop at 4:55, as at precisely 5:00 PM the van stops by the institute to take us back to our accommodation. Trust me, seriously long days, always - but the amount of learning is the win of the day.

    7:30 PM - Dinner Time

    After coming back to the accommodation, I studied till 7:30 and then rushed for dinner. Dinner is world cuisine, and the desserts are really delicious. After dinner, every day I wrote a blog post and retired early, as the next day was always going to be busier than the present one.

    What Did I Learn?

    As I mentioned earlier, I know SQL Server fairly well. I had expressed the same in my conversation as well. This is the reason I was assigned a fairly senior trainer, and we learned everything quite quickly. As I knew quite a few things, we went pretty fast through many topics. There were a few things I wanted to learn in detail as well as practice in the labs. We slowed down where we wanted to and rushed through the concepts where I was very comfortable. Here is the list of the things we covered in an action-packed three days:

    - Introduction to Business Intelligence (intro)
    - SQL Server Analysis Services (theory and lab)
    - SQL Server Integration Services (theory and lab)
    - SQL Server Reporting Services (theory and lab)
    - SQL Server PowerPivot (lab)
    - UDM (theory)
    - SharePoint concepts (theory)
    - Power View (demo)
    - Business Intelligence and security (discussion)

    Well, I was delighted that I was able to refresh lots of concepts during these three days. Thanks to my trainer and my friend who helped me have a good learning experience. I believe all the learning will help me in my growth and future career. With this I end this experience. I am planning to have another online learning experience later this month. I will blog about my experience as I begin it.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • Pella Increases Online Appointment Scheduling and Rapidly Personalizes and Updates Marketing Initiatives

    - by Michael Snow
    Originally posted on the Oracle Customers page.

    Oracle Customer: Pella Corporation
    Location: Pella, Iowa
    Industry: Industrial Manufacturing
    Employees: 7,100

    Pella Corporation is an innovative leader in creating a better view for homes and businesses by designing, testing, manufacturing, and installing quality windows and doors for new construction, remodeling, and replacement applications. A family-owned company, Pella has an 88-year history of innovation and, today, is the second-largest manufacturer of windows and doors in the country, including patio, entry, and storm doors. The company has 10 manufacturing facilities in the United States and window and door showrooms across the United States and Canada.

    In-home consultations are an important part of Pella's sales process. Several years ago, the company launched an online appointment scheduling tool to improve customer convenience. While the functionality worked well, the company wanted to increase online conversion rates and decrease the number of incomplete online appointment schedules. It also wanted to give its business analysts and other line-of-business personnel the ability to update the scheduling tool and interface quickly, without needing IT team intervention and recoding, to better capitalize on opportunities and personalize the interface for specific markets. Pella also looked to reduce IT complexity by selecting a system that integrated easily with its Oracle E-Business Suite Release 12.1 enterprise applications.

    Pella, which has a large Oracle footprint, selected Oracle WebCenter Sites as the foundation for its new, real-time appointment scheduling application. It used the solution to re-engineer the scheduling process and the information required to set up an appointment. Just a few months after launch, it is seeing improvement in the number of appointments booked online and experiencing fewer abandoned appointments during the scheduling process. As important, Pella can now quickly and easily make changes to images, video, and content displayed on the scheduling tool interface, delivering greater business agility. Previously, such changes required a developer and weeks of coding and testing. Today, a member of Pella's business analyst team can complete the changes in hours. This capability enables Pella to personalize the web experience for customers. For example, it can display different products or images for clients in different regions.

    The solution is also highly scalable. Pella is using Oracle WebCenter Sites for appointment scheduling now and plans to migrate Pella.com, its configurator tool, and dealer microsites onto the platform. Further, Pella plans to leverage the solution to optimize for mobile devices. "Moving ahead, we expect to extensively leverage Oracle WebCenter Sites to gain greater flexibility in updating the Web experience, thanks to the ability to make updates quickly without developer resources. Segmentation and targeting capabilities will allow us to create a more personalized experience across both traditional and mobile platforms," said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.

    A word from Pella Corporation

    "Oracle WebCenter Sites, from the start, delivered important benefits. We've redesigned the online scheduling process and are seeing more potential customers completing consultation bookings online. More important, the solution opens a world of other possibilities as we plan to migrate Pella.com and our dealer microsites to the platform, and leverage it to optimize the Web experience for our mobile devices." - Teri Lancaster, IT Manager, Customer Experience Applications, Pella Corporation

    Oracle Product and Services

    Oracle WebCenter Sites

    Why Oracle

    Pella has a long-standing relationship with Oracle. "We look to Oracle first for a solution. Our Oracle account team came to us with several solutions, and Oracle WebCenter Sites delivered the scalability, ease of use, and flexibility that we required for the appointment scheduling initiative and other web projects on the horizon, including migrating Pella.com and optimizing our site for mobile platforms," said Teri Lancaster, IT manager, customer experience applications, Pella Corporation.

    Implementation Process

    The Pella implementation team, working with Oracle partner Element Solutions, LLC, integrated the appointment-setting application with Pella.com as well as the company's Oracle E-Business Suite customer relationship management applications. Using Oracle WebCenter Sites' development tools and subversion capabilities to develop the application, the Element Solutions and Pella teams could work remotely and collaboratively, accelerating deployment. Pella went live with the new scheduling tool in just six months.

    Partner

    Oracle Partner: Element Solutions, LLC

    Element Solutions was instrumental at every major stage of the project, including design creation and approval, development, training, and rollout. "Element Solutions was a vital partner for our Oracle WebCenter Sites initiative. The team provided guidance and, more important, critical knowledge transfer at every stage, which equipped us to get the most out of this powerful and versatile solution. We were definitely collaboration partners," Lancaster said.

    Resources

    - Pella Corporation Upgrades Enterprise Applications to Continue to Improve Manufacturing Efficiency
    - Thousands of Customers Successfully and Smoothly Upgrade to Oracle E-Business Suite 12.1 for New Functionality, Lower Operating Costs and Improved Shared Operations
    - Managing the Virtual World

    Read the article

  • What's going on with INETA and the Regional Speakers Bureau?

    - by Chris Williams
    For those of you that have been waiting patiently (and not so patiently), I'm happy to say that we're very near completion on some changes/enhancements/improvements that will allow us to finally go live with the INETA Regional Speakers Bureau. I know quite a few of you have already registered, which is great (though some of you may need to come back and update your info), and we've had a few folks submit requests, mostly in a test capacity, but soon we'll be up and live. Here's how it breaks down. Be sure to read this, because things have changed a bit from when we initially announced it.

    1. The majority of our speaker/event funding is going into the Regional Speakers Bureau. The National Bureau still exists, but it's a good bit smaller than it was before, and it's not an "every group" benefit anymore. We'll be using the National Bureau as more of a strategic task force, targeting high-impact events and areas that need some community-building love from INETA. These will be identified and handled on a case-by-case basis, and may include more than just user group events.

    2. You're going to get more events per group, per year than you did before. Not only are we focusing more resources on this program, but we're also making a lot of efforts to use it more effectively. With the INETA Regional Speakers Bureau, you should be able to get 2-3 INETA speakers per year, on average. Not every geographical area will have exactly the same experience, but we're doing the best we can.

    3. It's not a farm-team program for the National Bureau. Unsurprisingly, I managed to offend a number of people when I previously made the comment that the Regional Speakers Bureau program was a farm team or stepping stone to the National Bureau. It was a poor choice of words. Anyone can participate in the Regional Speakers Bureau, and I look forward to working with all of you.

    4. There is assistance for your efforts. The exact final details are still being hammered out, but expect it to look something like this (all distances listed are based on a round trip):

       - Distances < 120 miles = $0
       - 121 miles - 240 miles = $50 (effectively 1 to 2 hours, each way)
       - 241 miles - 360 miles = $100 (effectively 2 to 3 hours, each way)
       - 361 miles - 480 miles = $200 (effectively 3 to 4 hours, each way)

       For those of you who travel a lot, we're working on a solution to handle group visits when you're away from home. These will (for now) be handled on a case-by-case basis.

    5. We're going to make it as easy as possible to work with the program. In order to do this, we need a few things from you. For speakers, that means your home address. It also means (maybe) filling out a simple one-line expense report via the INETA website. For user groups, it means making sure your meeting address is up to date as well.

    6. Distances will be automatically calculated from your home of record to the user group event and back. We realize that this is not a perfect solution in every instance, but we're not paying you to speak at an event, and you won't be taxed on this money. It's simply some assistance to make your community efforts easier - our way of saying thanks for everything you do.

    7. Sounds good so far, what's the catch? There's always a catch, right? In this case there are two of them: 1) At this time, Microsoft employees are welcome to use the website to line up speaking engagements with user groups, but are not eligible for financial assistance. 2) Anyone can register and use the website to line up speaking engagements with user groups; however, you must receive and maintain a net score of 3+ positive ratings (we're implementing a thumbs up / thumbs down system) in order to receive financial assistance. These ratings are provided by the user group leaders after the meeting has taken place.

    8. Involvement by the user group leaders is a key factor in the success of this program. Your job isn't done once you request a speaker. After you've had your meeting, it's critical that you go back to the website and take a very small survey. Doing this ensures that the speaker gets rated (and compensated if eligible) and also ensures that you can make another request, since you won't be able to make a new request if you have an old one outstanding.

    9. What about Canada? We're definitely working on that. Unfortunately there's nothing new to report on that front, other than to say that we're trying.

    So... this is where things stand currently. We're working very quickly to get this in place and get speakers and groups together. If you have any questions, please leave a comment below and I'll answer them as quickly as possible. If I've forgotten anything, or if things change, I'll update it here.

    Thanks,
    Chris G. Williams
    INETA Board of Directors

    Read the article

  • SUPINFO International University in Mauritius

    For a while now I have been considering picking up my activities as a student, and I'd like to get a degree in Computer Science.

    Personal motivation

    I mean, after all these years as a professional software (and database) developer, I have the personal urge to complete this part of my education. Having various certifications from Microsoft and being awarded as a Microsoft Most Valuable Professional (MVP) twice looks pretty awesome on a resume, but having a "proper" degree would just complete my package. During the last couple of years I already got in touch with C-SAC (a local business school with degree courses), the University of Mauritius, and BCS, the Chartered Institute for IT, to check the options to enroll as an experienced software developer. Quite frankly, it was kind of alienating to receive their feedback: start from scratch! No, seriously? Spend x amount of years sitting in courses that might be outdated and form part of your daily routine? Probably being in the awkward situation in which your professional expertise exceeds the lecturer's knowledge? I don't know... but if that's the path to walk... well, then I might have to go for it.

    SUPINFO International University

    Some weeks ago I was contacted by the General Manager, Education Recruitment and Development of Medine Education Village, Yamal Matabudul, to have a chat on how the local IT scene, namely the Mauritius Software Craftsmanship Community (MSCC), could assist in their plans to promote their upcoming campus. Medine went into partnership with the French-based SUPINFO International University, and Mauritius will be the 36th location worldwide for SUPINFO. Actually, the concept of SUPINFO is very similar to the common understanding of an apprenticeship in Germany. Not only does a student enroll in the programme, but they will also be placed into various internships as part of the curriculum. It's a big advantage, in my opinion, as the person stays in touch with the daily procedures and workflows of the real world of IT. Statements like "We just received a 'crash course' of information and learned new technology which is equivalent to 1.5 months of lectures at the university" wouldn't form part of the experience of such an education.

    Open Day at the Medine Education Village

    Last Saturday, Medine organised their Open Day, and it was the official inauguration of the SUPINFO campus in Mauritius. It's now listed on their website, too - but be warned, the site is mainly in French, although the courses are all done in English. Not only was it a big opportunity to "hang out" on the Medine campus, but it was great to see the first professional partners for their internship programme, too. Oh, just for the record, IOS Indian Ocean Software Ltd. will also be among the future employers for SUPINFO students. More about that in an upcoming blog entry.

    [Photo: Open Day at Medine Education Village - SUPINFO International University in Mauritius]

    Mr Alick Mouriesse, President of SUPINFO, arrived the previous day, and he gave all attendees a great overview of the roots of SUPINFO, the general development of the educational syllabus, and their high emphasis on partnerships with local IT companies in order to assist their students in getting future jobs but also to feel the heartbeat of technology live - something which is completely missing in classic institutions of tertiary education in Computer Science. And since I was on tour with my children, as usual during weekends, he also talked about the outlook of having a SUPINFO campus in Mauritius.

    Apart from the close connection to IT companies and providing internships to students, SUPINFO clearly works on an international level, meaning students of SUPINFO can move around the globe and continue their studies seamlessly. For example, you might enroll for your first year in France, then continue your second and third years in Canada or any other country with a SUPINFO campus to earn your bachelor's degree, and then live and study in Mauritius for the next two years to achieve a master's degree.

    [Photos: Having a chat with Dale Smith, Expand Technologies, after his interesting session on Technological Entrepreneurship (TechPreneur); more questions by other craftsmen of the Mauritius Software Craftsmanship Community]

    And of course, this concept works in any direction, giving Mauritian students a huge (!) opportunity to live, study, and work abroad. And thanks to this, Medine has already announced that there will be new facilities near Cascavelle to provide dormitories and other amenities for international students coming to our island. Awesome!

    Okay, but why SUPINFO?

    Well, coming back to my original statement - I'd like to get a degree in Computer Science - SUPINFO has a process called Validation of Acquired Experience (VAE) which is tailor-made for employees in the field of IT and allows you to enroll in their course programme. I already got in touch with their online support chat, but was only redirected to some FAQs on their website, unfortunately. So, during the Open Day I seized the opportunity to have a one-on-one conversation with Alick Mouriesse, and he clearly encouraged me to gather my certifications and working experience. SUPINFO does an individual evaluation prior to course-level assignment, and hopefully my chances of getting some modules ahead of studies are looking better than at the other institutes. Don't get me wrong, I don't want to go down the easy route, but why should someone sit through "Database 101" or "Principles of OOP" when applying and preaching database normalisation and practicing Clean Code Developer are like flesh and blood? Anyway, I'll be off to get my transcripts of certificates together with my course assignments from the old days at the university. Yes, I studied Applied Chemistry for a couple of years before crossing over into IT and software development in particular... ;-)

    Read the article

  • Productivity vs Security [closed]

    - by nerijus
    I don't really know if this is the right place to ask such a question, but it is about programming, in a different light. So: I'm currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. To give you a sample: the company VPN is configured so that if you have VPN, then HTTP traffic is banned. Bearing this in mind, can you imagine my workflow?

    Morning. OK, time to get the latest source. Oops, no VPN. Let's connect. Click-click. 3 sec. wait. OK, getting source. Do I have emails? Oops, VPN is on; can't check my emails. Need to wait for the source to come down. Finally, here it is! OK, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let's track it down. It is in TFS already. Oh damn, I need VPN. Click-click. OK, there is the description. Yeah, I have seen this issue on stackoverflow.com. Let's go there. Oops, no internet. Click-click. No internet. What? ipconfig... the DHCP server kicked me out. Damn. Renew IP. 1..2..3. OK, internet is back. Google: site:stackoverflow.com. 3 min. I have the solution. Great, I love stackoverflow.com. I don't want to remember the days when there was no stackoverflow.com. OK, copy-paste this into Studio. Damn, Studio has stalled; it can't reach the files on TFS. Click-click. VPN is back. Check out the source, paste my code. Grand. Let's see what the other comments about the issue on stackoverflow.com say. Hmm, there is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened and closed, my internet goes down solid. The only way to get internet back is to reboot. All my browser tabs, SQL windows, and Studio sessions will be gone. This happened just now, while I was typing this.

    Back to the issue I am solving right now: I am getting frustrated - I no longer care about the better solution for this issue. Let's do it somehow and forget. This click-click barrier between the internet and TFS kills me... Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop; I am not allowed to make changes. I am very, very lucky to have admin privileges on my machine. Most developers don't. So I have just learned to live with this frustration. It takes away 40-60 minutes daily. I tried to email company support and the admins. They are too important and too busy with something, so my little man's problem was just ignored. Politely ignored.

    The question is: is this normal in the corporate world? (I have been in the States, Canada, and Germany, and have never seen this.)

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted by: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?

    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets - including master data - and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and the geo hierarchy are examples of reference data changes.[1]

    The following is an illustrative list of examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships & cross-domain mappings, or standards.

    - Types & Codes: transaction codes; lookup tables (e.g., gender, marital status); status codes; role codes; domain values
    - Taxonomies: industry classification categories and codes, e.g., the North American Industry Classification System (NAICS); product categories; sales territories (e.g., geo, industry verticals, named accounts, federal/state/local/defense); market segments; Universal Standard Products and Services Classification (UNSPSC); eCl@ss
    - Relationships / Mappings: product/segment and product/geo mappings; city -> state -> postal code; customer/market segment; business unit/channel; country code/currency code/financial account; International Classification of Diseases (ICD) mappings, e.g., ICD-9 -> ICD-10
    - Standards: calendars (e.g., Gregorian, fiscal, manufacturing, retail, ISO 8601); currency codes (e.g., ISO); country codes (e.g., ISO 3166, UN); date/time and time zones (e.g., ISO 8601); tax rates

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample use cases of reference data management

    Healthcare: diagnostic codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the two code sets, ICD-9 and ICD-10, offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency, and quality of diagnoses, medical practitioners may need to "cross-walk" mappings - either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend management: product, service & supplier codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify, and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), as well as ACH and bank systems, can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change management: point-of-sale transaction codes and product codes. In the specialty finance industry, enterprises are confronted with usury laws - governed at the state and local level - that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multinational companies: industry classification schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multinational companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References

    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
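    To make the "cross-walk" idea concrete, here is a minimal sketch of a mapping table for code-set transitions such as ICD-9 to ICD-10 (illustrative only; real ICD GEM mappings carry additional qualifier flags, and the names here are assumptions):

        CREATE TABLE code_crosswalk (
            source_scheme  VARCHAR(16) NOT NULL,  -- e.g. 'ICD9'
            source_code    VARCHAR(16) NOT NULL,
            target_scheme  VARCHAR(16) NOT NULL,  -- e.g. 'ICD10'
            target_code    VARCHAR(16) NOT NULL,
            effective_from DATE        NOT NULL,
            PRIMARY KEY (source_scheme, source_code, target_scheme, target_code)
        );

        -- Reporting can then "walk" in either direction by swapping
        -- the source and target columns in the join.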

    Read the article

  • 7-Eleven Improves the Digital Guest Experience With 10-Minute Application Provisioning

    - by MichaelM-Oracle
    By Vishal Mehra - Director, Cloud Computing, Oracle Consulting

    Making the Cloud Journey Matter

    There's much more to cloud computing than cutting costs and closing data centers. In fact, cloud computing is fast becoming the engine for innovation and productivity in the digital age. Oracle Consulting Services contributes to our customers' cloud journeys by accelerating application provisioning and rapidly deploying enterprise solutions. By blending flexibility with standardization, our Middleware as a Service (MWaaS) offering is ensuring the success of many cloud initiatives.

    10-Minute Application Provisioning Times at 7-Eleven

    As a case in point, 7-Eleven recently highlighted the scope, scale, and results of a cloud-powered environment. The world's largest convenience store chain is rolling out a Digital Guest Experience (DGE) program across 8,500 stores in the U.S. and Canada. Every day, 7-Eleven connects with tens of millions of customers through point-of-sale terminals, web sites, and mobile apps. Promoting customer loyalty, targeting promotions, downloading digital coupons, and accepting digital payments are all part of the roadmap for a comprehensive and rewarding customer experience. And what about the time required for deploying successive versions of this mission-critical solution? Ron Clanton, 7-Eleven's DGE Program Manager, Information Technology, reported at Oracle OpenWorld: "We are now able to provision new environments in less than 10 minutes. This includes the complete SOA Suite on Exalogic, and Enterprise Manager managing both the SOA Suite, Exalogic, and our Exadata databases."

    OCS understands the complex nature of innovative solutions and has the processes and expertise to help clients like 7-Eleven rapidly develop technology that enhances the customer experience with little more than the click of a button. OCS understood that the 7-Eleven roadmap required careful planning, agile development, and a cloud-capable environment to move fast and perform at enterprise scale.

    Business Agility

    Today's business-savvy technology leaders face competing priorities as they confront the digital disruptions of the mobile revolution and next-generation enterprise applications. To support an innovation agenda, IT is required to balance competing priorities between development and operations groups. Standardization and consolidation of computing resources are the keys to success. With our operational and technical expertise promoting business agility, Oracle Consulting's deep Middleware as a Service experience can make a significant difference to our clients by empowering enterprise IT organizations with the computing environment they seek, to keep up with the pace of change that digitally driven business units expect. Depending on the needs of the organization, this environment runs within a private, public, or hybrid cloud infrastructure. Through on-demand access to a shared pool of configurable computing resources, IT delivers the standard tools and methods for developing, integrating, deploying, and scaling next-generation applications. Gold profiles of predefined configurations eliminate the version mismatches among databases, application servers, and SOA suite components delivered both by Oracle and by other enterprise ISVs. These computing resources are well defined in business terms, enabling users to select what they need from a service catalog.

    Striking the Balance between Development and Operations

    As a result, development groups have the flexibility to choose from a menu of available services with descriptions of standard business functions, service-level guarantees, and costs. Faced with the consumerization of enterprise IT, they can deliver the innovative customer experiences that seamlessly integrate with underlying enterprise applications and services. This cloud-powered development and testing environment accelerates release cycles to ensure agile development and rapid deployments. At the same time, the operations group relies on certified stacks and frameworks, tuned to predefined environments and patterns. Operators can maintain a high level of security and continue best practices for application and system monitoring and management. Moreover, faced with the challenge of delivering on service-level agreements (SLAs) with the business units, operators can ensure performance, scalability, and reliability of the infrastructure. The elasticity of a cloud computing environment - the ability to rapidly add virtual machines and storage in response to computing demands - makes a difference in hardware utilization and efficiency.

    Contending with Continuous Change

    What does it take to succeed on the promise of the cloud? As the engine for innovation and productivity in the digital age, IT must face not only the technical transformations but also the organizational challenges of the cloud. Standardizing key technologies, resources, and services through cloud computing is only one part of the cloud journey. Managing relationships among multiple departments and projects over time - developing the management, governance, and monitoring capabilities within IT - is an often unmentioned but all too important second part. In fact, IT must have the organizational agility to contend with continuous change. This is where a skilled consulting services partner can play a pivotal role as a trusted advisor in the successful adoption of cloud solutions. With a lifecycle services approach to delivering innovative business solutions, Oracle Consulting Services has the expertise and a portfolio of services to help enterprise customers succeed on their cloud journeys, as well as with other converging megatrends.

    Read the article

  • How to get dropdown value using jsp:useBean and jsp:setProperty?

    - by littlevahn
I have a rather simple form in JSP that looks like this: <form action="response.jsp" method="POST"> <label>First Name:</label><input type="text" name="firstName" /><br> <label>Last Name:</label><input type="text" name="lastName" /><br> <label>Email:</label><input type="text" name="email" /><br> <label>Re-enter Email:</label><input type="text" name="emailRe" /><br> <label>Address:</label><input type="text" name="address" /><br> <label>Address 2:</label><input type="text" name="address2" /><br> <label>City:</label><input type="text" name="city" /><br> <label>Country:</label> <select name="country"> <option value="0">--Country--</option> <option value="1">United States</option> <option value="2">Canada</option> <option value="3">Mexico</option> </select><br> <label>Phone:</label><input type="text" name="phone" /><br> <label>Alt Phone:</label><input type="text" name="phoneAlt" /><br> <input type="submit" value="submit" /> </form> But when I try to access the value of the select box in my Java class, I get null. I've tried reading it in as a String and as an array of Strings; neither seems to grab the right value. The response.jsp looks like this: <%@ page language="java" %> <%@ page import="java.util.*" %> <%@page contentType="text/html" pageEncoding="UTF-8"%> <jsp:useBean id="formHandler" class="validation.RegHandler" scope="request"> <jsp:setProperty name="formHandler" property="*" /> </jsp:useBean> <% if (formHandler.validate()) { %> <jsp:forward page="success.jsp"/> <% } else { %> <jsp:forward page="retryReg.jsp"/> <% } %> I already have JavaScript validation in place, but I wanted to make sure I covered validation and checking for non-JS users. The RegHandler just uses the name field to refer to the value in the form. Any idea how I could access the select box's value?
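One plausible cause, sketched below: with <jsp:setProperty property="*" />, the container calls a bean setter only when the property name matches the request parameter name exactly, so a null value usually means validation.RegHandler has no setCountry(String) method. The class here is a hypothetical minimal sketch (the real RegHandler isn't shown in the question); only the country property is spelled out.

package validation;

public class RegHandler {
    private String country;

    // Without this exact setter, property="*" silently skips the "country"
    // parameter and getCountry() stays null.
    public void setCountry(String country) { this.country = country; }
    public String getCountry() { return country; }

    public boolean validate() {
        // Treat the "--Country--" placeholder (value "0") as not selected.
        return country != null && !"0".equals(country);
    }
}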

    Read the article

  • ASP.NET MVC Consume JSONResult in Bing Maps API

    - by rockinthesixstring
I know there are a few topics on this, but I seem to be fumbling my way through with no results. I'm trying to use a controller to return JSON results to my Bing Maps functions. Here's what I have for my controller (yes, it is properly returning JSON data): Function Regions() As JsonResult Dim rj As New List(Of RtnJson)() rj.Add(New RtnJson("135 Bow Meadows Drive, Cochrane, Alberta", "desc", "title")) rj.Add(New RtnJson("12 Bowridge Dr NW, Calgary, Alberta, Canada", "desc2", "title2")) Return Json(rj, JsonRequestBehavior.AllowGet) End Function Then in my script I have this, but it's not working. <script type="text/javascript"> var map = null; var centerLat = 51.045 ; var centerLon = -114.05722; var json_object = $.getJSON("<%: Url.Action("Regions", "Events")%>"); function LoadMap() { map = new VEMap('bingMap'); map.LoadMap(new VELatLong(centerLat, centerLon), 10); $.each(json_object, function () { alert(this.address); //this alert is returning "address is undefined" StartGeocoding(this.address, this.title, this.desc); }); } function StartGeocoding(address, title, desc) { map.Find(null, // what address, // where null, // VEFindType (always VEFindType.Businesses) null, // VEShapeLayer (base by default) null, // start index for results (0 by default) null, // max number of results (default is 10) null, // show results? (default is true) null, // create pushpin for what results? (ignored since what is null) true, // use default disambiguation? (default is true) false, // set best map view? (default is true) GeocodeCallback); // call back function } function GeocodeCallback(shapeLayer, findResults, places, moreResults, errorMsg) { var bestPlace = places[0]; // Add pushpin to the *best* place var location = bestPlace.LatLong; var newShape = new VEShape(VEShapeType.Pushpin, location); var desc = "Latitude: " + location.Latitude + "<br>Longitude:" + location.Longitude; newShape.SetDescription(desc); newShape.SetTitle(bestPlace.Name); map.AddShape(newShape); } $(document).ready(function () { LoadMap(); }); </script>
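A likely explanation, sketched below: $.getJSON is asynchronous and returns the request object, not the parsed data, so iterating json_object can never reach the array. A minimal rework of LoadMap under that assumption; the property names address, title, and desc are guesses at how the RtnJson fields serialize, since that class isn't shown.

// Hedged sketch: consume the results inside the $.getJSON success callback.
function LoadMap() {
    map = new VEMap('bingMap');
    map.LoadMap(new VELatLong(centerLat, centerLon), 10);
    $.getJSON('<%: Url.Action("Regions", "Events") %>', function (data) {
        // data is the deserialized List(Of RtnJson)
        $.each(data, function (i, item) {
            StartGeocoding(item.address, item.title, item.desc);
        });
    });
}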

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
Background
I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and a city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.)

Tool Set
The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

Poor Model Choice
Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows: SELECT regr_slope( amount, year_taken ), regr_intercept( amount, year_taken ), corr( amount, year_taken ) FROM temp_regression INTO STRICT slope, intercept, correlation; The results are returned to JasperReports using: SELECT year_taken, amount, year_taken * slope + intercept, slope, intercept, correlation, total_measurements INTO result; JasperReports calls into PostgreSQL using the following parameterized analysis function: SELECT year_taken, amount, measurements, regression_line, slope, intercept, correlation, total_measurements, execute_time FROM climate.analysis( $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius}, $P{CategoryId}, $P{Year1}, $P{Year2} ) ORDER BY year_taken This is not an optimal solution, because it gives the false impression that the climate is changing at a slow but steady rate.

Questions
Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope: What is a better regression model to apply? What CRAN packages provide such models? (Installable, ideally, using apt-get.) How can the R functions be called within a PostgreSQL function? If no such functions exist: What parameters should I try to obtain for functions that will produce the desired fit? How would you recommend showing the best-fit curve? Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
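For the "call R from PostgreSQL" part, one well-trodden route is the PL/R procedural language; the sketch below wraps R's loess() (a locally weighted, non-linear smoother) so the fitted curve can be computed server-side. Everything here is an assumption layered on the question's schema: PL/R must be installed and enabled separately (Debian/Ubuntu have packaged it as postgresql-<version>-plr), named parameters assume a reasonably recent PL/R, and the function name is illustrative.

-- Hedged sketch: a PL/R wrapper around R's loess(); assumes the plr
-- language is available in this database.
CREATE OR REPLACE FUNCTION climate.loess_fit(x float8[], y float8[])
RETURNS float8[] AS $$
  # x and y arrive as R numeric vectors; return the smoothed curve
  fit <- loess(y ~ x)
  as.numeric(predict(fit, data.frame(x = x)))
$$ LANGUAGE plr;

-- Usage against the temp_regression table from the question:
SELECT climate.loess_fit(
         array_agg(year_taken ORDER BY year_taken)::float8[],
         array_agg(amount     ORDER BY year_taken)::float8[] )
FROM temp_regression;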

    Read the article

  • How to save an NTFS partition that suddenly became empty

    - by SteveO
One NTFS partition on my laptop was suddenly wiped out, without any notice to me, when I rebooted from Windows 7 to Ubuntu 12.04 today. I need help saving my files on that partition; they are important and unfortunately haven't been backed up yet. My laptop has two operating systems, Windows 7 and Ubuntu 12.04, with an NTFS partition shared between the two for storing data files (109 GB, about 97% of which has been used). I have almost always been using Ubuntu, but today I happened to have to work under Windows. Here is a record of what happened, in time order. When I started into Windows 7, right before being able to log in, it took a while and two reboots to configure Windows. I thought this was normal, since the last time I used Windows, two weeks ago, it took very long and several reboots to update it; the time I had used Windows before that was in November last year. After finally being able to log in to Windows 7, I installed LibreOffice, MathType (I got it from http://dl.portablesoft.org/down/?id=2515, which I originally thought was a trial version, but later learned was a cracked version, and that felt wrong. I made a copy of it at Dropbox, http://dl.dropbox.com/u/13029929/MathType_6.8_PortableSoft.rar, not to distribute it but to list it there in case it helps to identify the problem), and MiKTeX. I then edited some .doc files on the NTFS partition under both Microsoft Office with MathType and LibreOffice. When I finished working under Windows and rebooted into Ubuntu, Ubuntu did some filesystem checking and reported that the NTFS partition could not be mounted. I then rebooted into Windows again and found that the NTFS partition had been emptied: all the data files were gone, and only one system file, bootsqm.dat, and one system directory, System Volume Information, remained, with their last-updated time being the time when I first rebooted from Windows to Ubuntu (in fact, 4 hours ahead of the actual time of that reboot; see immediately below). I also noticed that the time shown by Windows is not correct for my time zone ((UTC-05:00) Eastern Time (US & Canada)): it is 4 hours ahead of the correct time (my current time is 3 am, but the computer shows 7 am). The same happened when I rebooted into Ubuntu again: the NTFS partition had been emptied, left with only the Windows system file bootsqm.dat and the Windows system directory System Volume Information, and the time shown by Ubuntu is 4 hours ahead of the correct time. What can I do to get my data files back from the NTFS partition? If I am not able to do it myself, would some professionals be able to help me out? Thanks a lot! PS: I don't think I did anything that required emptying that partition, but I did do quite a bit of work right before the reboot from Windows to Ubuntu when the problem occurred. Did I do something wrong?
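For readers hitting the same symptom, a cautious first-response sketch (all tools are standard Ubuntu packages; /dev/sda5 is a placeholder for the affected partition, so identify yours first, and avoid writing anything to it):

# Hedged sketch, not machine-specific advice: image the partition first, repair later.
sudo fdisk -l                              # find the NTFS partition; /dev/sda5 below is a placeholder
sudo apt-get install gddrescue testdisk    # GNU ddrescue and TestDisk
sudo ddrescue -n /dev/sda5 /media/usbdrive/ntfs.img /media/usbdrive/ntfs.log
testdisk /media/usbdrive/ntfs.img          # search the read-only image for the lost filesystem/files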

    Read the article

  • Making Money from your SQL Server Blog

    - by Bill Graziano
My SQL Server blog reading list is around one hundred blogs.  Many people are writing great content and generating lots of page views.  I see some of them running Google AdSense and trying to make a little money off their traffic.  If you want to earn some extra money from what you’ve written, there are a couple of options.  And one new option that I’m announcing here.

Background
Internet advertising is sold based on a few different pricing schemes.  Flat Fee.  You offer either all your impressions (page views) or some percentage of your impressions in exchange for a flat monthly fee.  CPM, or cost per thousand impressions.  If the quoted price is $2 CPM you’ll get $2 for every 1,000 times the ad is displayed.  While you might think the “M” means millions, the “M” in CPM is the Roman numeral for 1,000. CPC, or cost per click.  This is also called PPC, or pay per click.  In this method you get paid based on how many clicks there are on the ad.  CPA, or cost per action.  In this method you get paid based on an action that occurs on the advertiser's site after someone clicks on the ad.  This is typically some type of sign-up form.  This is how most affiliate programs work. Darren Rowse at ProBlogger has been writing about blogging and making money off blogs for years.  He has a good introduction to making money on your blog in his “Making Money” section.  If you’re interested in learning more, he has a post up titled How to Make More Money From Your Blog in the New Year that links to many of his best posts on the subject.

Google AdSense
This is the most common method for earning money from blogging.  It’s easy to set up and administer.  You tell AdSense what size ads you’d like to run and it gives you a little piece of JavaScript to put on your site.  AdSense quickly learns the topics you write about and displays ads that are appropriate for your site.  I typically see ads for hosting, SQL Server tools and developer tools running in AdSense slots.  AdSense pays on a CPC model.  If you translate that back to CPM pricing you’ll see rates from $0.50 to $1.00 CPM.

Amazon
While you might not make much money writing books, it’s now possible to make even less helping Amazon sell them.  You can sign up for an Amazon affiliate program.  Each time you send a reader to Amazon through your link and they buy the book, you get a cut of that sale.  This is the CPA model from above.  Amazon can help you build some pretty nice “stores”.  Here’s the SQL Server bookstore I built for SQLTeam.com.  If you’re just putting in a page with books like I’ve done on SQLTeam, you should keep your expectations low.  If you’re writing book reviews or suggesting books on your blog, it really does make sense to set up an Amazon affiliate link.  People are much more likely to buy a book based on a review from a trusted source.  I always try to buy through a referral link if there is one. Amazon pays about 4% of the price as a referral fee.  You also get credit for anything else they buy while on the site.  I recently had someone buy an iPod nano with their SQL Server book, making me an extra $5.60 richer!  Estimating how much you can make is difficult, though.  How much attention you draw to the links and book reviews can dramatically affect the earnings.

Private Ad Sales
This is the hardest but potentially most lucrative option.  You sell advertising directly to companies that want to sell things to your readers.
Typically this would be SQL Server tool vendors, hosting companies, or anyone else that wants to make money off database administrators.  This is also the most difficult to do.  You’ll need contacts at the companies and enough page views to make it worth their while.  You’ll also need software to track the page views and clicks, geo-target your ads, and smooth out the impressions.  Your earnings are based on whatever you can negotiate with the companies.

SQL Server Ad Network
For the last couple of years I’ve run any extra ads that I sold on the SQLTeam weblogs.  You can see an example of that on Mladen’s blog.  The ad in the upper right corner is one that I’m running for him.  (Note: Many of the ads I’m running are geo-targeted to only appear in English-speaking countries.  You may see a different set of ads outside the US, Canada and the UK.  You can also see he has a couple of Google ads on his blog.)  When I run ads on his blog I split the advertising revenue with him.  They make a little and I make a little. I recently started to expand this and sell advertising specifically to run on SQL Server-related blogs.  I’m also starting to run ads on non-SQLTeam blogs.  The only way I can sell more advertising is to have more blogs to run it on.  And that’s where you come in. I’ve created a SQL Server advertising network.  I handle all the ad sales and provide the technology to serve the ads.  I handle collections and payments back to you.  You get paid at the end of each month regardless of when (or if) the advertiser actually pays.  All you need to do is add a small piece of JavaScript to your site to display the ads. If you’re writing about SQL Server and interested in earning a little money for your site, I’d like to talk to you.  You can use the Contact Us page on SQLTeam.com to reach me.  Running advertising on your blog isn’t for everyone.  If you’re concerned about what advertisers might think about certain posts, then you might not be a good fit.  For the most part this isn’t an issue.  You’ll also need to have a PayPal account to receive payments.  You probably won’t get rich doing this.  But you can earn extra cash on the side for doing what you would do anyway.  I do know that people have earned enough to buy themselves a nice laptop doing this. My initial target is blogs with more than 10,000 page views per month.  I expect to pay two to three times what Google pays.  If you have fewer than 10,000 page views per month but are still interested, I’d like to hear from you.  I may not be able to sign up smaller blogs right away, but we’ll get the process started.  If you’re unsure about your traffic, Google Analytics is a free tool that provides great reporting on traffic, popular posts, and how people find your blog.  If you have any questions or are just curious, drop me a line and I’ll try to answer your questions.
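To put rough numbers on the pricing models above (every figure here is illustrative, using only the rates quoted in this post): a blog serving 50,000 page views a month with one AdSense slot at an effective $0.75 CPM earns 50 x $0.75 = $37.50 a month, while the same traffic sold at two to three times that rate through private or network sales would bring in roughly $75 to $112.  On the Amazon side, the 4% referral fee means a $30 book pays about $1.20 per sale.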

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the operating system is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. We will have a call to discuss this blog - please join us! Date: Thursday, November 1, 2012. Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00). 1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: oracle123. 4. Click "Join". To join the teleconference: Call-in toll-free number: 1-866-682-4770 (US/Canada). Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON Conference Code: 7629343#. Security code: 7777#.

Here is a quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time values of CPU, memory, network and disk utilization. Find the top CPU and memory processes in real time or on a given historical day. Determine proper monitoring thresholds based on historical data. View Solaris services status details. Drill down into a process's details. View the busiest zones, if applicable.

Where to start
To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab. Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty. Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example, those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the dependencies, dependents and also the location of the service log file (not shown in the picture, as you need to scroll down to see the log file).
The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter, more detailed time frames, such as one hour or one day.

Applying new monitoring values
When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you either want custom monitoring for a specific machine, or want to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the Actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

Visiting the past
Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am, along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here: you can also view historical data to determine which of the zones was the busiest at a given time.

Under the hood
The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/. An "os.zip" file exists for the main OS; inside you will find many small text files, each named after the epoch time stamp at which it was taken. If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder for each zone with its own "os.zip" in it. If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller. The actual script collecting the data can be viewed for debugging purposes as well: on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect; on Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view them per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner
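For the curious, a hedged sketch of inspecting that raw data on a Linux agent. The paths are the ones listed above; the epoch-to-date conversion assumes GNU date, so adapt it on Solaris:

# Unpack an agent's historical samples and print each sample's timestamp.
cd /var/opt/sun/xvm/analytics/historical
unzip -o os.zip -d /tmp/os-analytics
for f in /tmp/os-analytics/*; do
    echo "$(date -d @"$(basename "$f")")  ->  $f"
done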

    Read the article

< Previous Page | 8 9 10 11 12 13  | Next Page >