Search Results

Search found 64327 results on 2574 pages for 'test first'.


  • First steps with Oracle ADF Mobile for iOS and Android

    - by Bruno.Borges
    Oracle recently announced its new mobile development platform, called Oracle ADF Mobile. With it, you can build truly Java applications: deploy and run real Java code on both Android and iOS with its self-contained Java runtime. It also comes with PhoneGap, which allows you to use any feature your phone offers, like sensors and the camera. It's probably the most complete solution for mobile development out there, simply because with Oracle ADF Mobile you can write native, hybrid or web applications for your smartphone and tablet. Do you want to take a quick look at what can be done with it? Check out this video! Now, to start with Oracle ADF Mobile, here are the first steps you will have to go through:
    1. Download Oracle JDeveloper: Go to this link and download the install file for your environment (Windows, Linux 32-bit or Generic).
    2. Install JDeveloper (of course): If you need help with this, look at the documentation (if you've downloaded 11gR2, click here).
    3. Download the Oracle ADF Mobile bundle: This is the download page for Oracle ADF Mobile. Accept the license as usual at the top, and continue with the Download button. It will take you to another page, where you will see a table containing a download link. Click on it and it will start downloading a ZIP file.
    4. Start JDeveloper: It may self-update; restart the IDE if you are asked to. Then go to Help > Check for Updates, click Next and make sure you are at the "Source" tab, select "Install From Local File", select the Oracle ADF Mobile ZIP you downloaded in step 3, and finish the process.
    Now you have JDeveloper with Oracle ADF Mobile successfully installed! There are two great tutorials to start coding with ADF Mobile. Just choose your platform: Android Tutorial or iOS Tutorial. And have fun! :-)

    Read the article

  • My first animation - Using SDL.NET C#

    - by Mark
    Hi all! I'm trying to animate a player object in my 2D grid when the user clicks somewhere in the screen. I have the following 4 variables: oX (current player position X), oY (current player position Y), dX (destination X), dY (destination Y). How can I make sure the player moves in a straight line to the new XY coordinates? The way I'm doing it now is really awful and causes the player to first move along the x axis, and only then along the y axis. Can someone give me some guidance with the involved math, because I'm really not sure how to accomplish this. Thank you for your time. Kind regards, Mark
    Update: It's working now, but what's the right way to check whether the current position is equal to the target position?
        private static void MovePlayer(double x2, double y2, int duration)
        {
            double hX = x2 - m_PlayerPosition.X;
            double hY = y2 - m_PlayerPosition.Y;
            double Length = Math.Sqrt(Math.Pow(hX, 2) + Math.Pow(hY, 2));
            hX = hX / Length;
            hY = hY / Length;
            while (m_PlayerPosition.X != Convert.ToInt32(x2) || m_PlayerPosition.Y != Convert.ToInt32(y2))
            {
                m_PlayerPosition.X += Convert.ToInt32(hX * 1);
                m_PlayerPosition.Y += Convert.ToInt32(hY * 1);
                UpdatePlayerLocation();
            }
        }
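    One possible answer, sketched below: exact-equality checks like the one in the while loop above can spin forever if integer rounding makes the position step over the target, so it is safer to test the remaining distance and snap to the destination once it is within one step. This is a minimal sketch, not code from the thread; it reuses the question's m_PlayerPosition and UpdatePlayerLocation(), drops the unused duration parameter, and assumes a hypothetical Speed constant of one pixel per update.
        private static void MovePlayer(double targetX, double targetY)
        {
            const double Speed = 1.0; // assumed step size in pixels per update

            double dx = targetX - m_PlayerPosition.X;
            double dy = targetY - m_PlayerPosition.Y;
            double distance = Math.Sqrt(dx * dx + dy * dy);

            // Keep stepping along the normalized direction while more than one
            // step away from the target; no exact-equality comparison needed.
            while (distance > Speed)
            {
                m_PlayerPosition.X += Convert.ToInt32(dx / distance * Speed);
                m_PlayerPosition.Y += Convert.ToInt32(dy / distance * Speed);
                UpdatePlayerLocation();

                // Recompute the direction and remaining distance each step.
                dx = targetX - m_PlayerPosition.X;
                dy = targetY - m_PlayerPosition.Y;
                distance = Math.Sqrt(dx * dx + dy * dy);
            }

            // Within one step of the destination: snap to it and stop.
            m_PlayerPosition.X = Convert.ToInt32(targetX);
            m_PlayerPosition.Y = Convert.ToInt32(targetY);
            UpdatePlayerLocation();
        }
    Keeping the position in floating point and rounding only when drawing would avoid the rounding drift altogether; the snap at the end is the cheap fix when the position must stay integral.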

    Read the article

  • First Experience with Web Services

    When I first started programming with Microsoft .Net (1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .Net was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com. After my spider was built I had no real idea what I could/should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's. Additionally, I used the API to feed my .Net web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned in XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query.
    Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.
    Bing API Version 2.0 SourceTypes:
        Web            Searches for web content                              (e.g. "sushi")
        Image          Searches for images on the web                        (e.g. "sushi")
        News           Searches news stories                                 (e.g. "sushi")
        InstantAnswer  Searches Encarta online                               (e.g. "what is sushi", "convert 5 feet to meters", "x*5=7", "2 plus 2")
        Spell          Searches Encarta dictionary for spelling suggestions
        Phonebook      Searches phonebook entries                            (e.g. "sushi in Los Angeles")
        RelatedSearch  Returns the query strings most similar to yours
        Ad             Returns advertisements to incorporate with results
    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.
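    The request pattern described above (parameters on a GET URL, XML back) looks roughly like the minimal C# sketch below. This is illustrative only, not code from the post: the endpoint URL, query parameters and result element names are hypothetical placeholders, since the original MSN Search API has long been retired and required its own URL format and an AppID key.
        using System;
        using System.Net;
        using System.Xml;

        class SearchApiSketch
        {
            static void Main()
            {
                // Hypothetical search endpoint for illustration only.
                string url = "http://api.example.com/search?q=sushi&format=xml";

                using (WebClient client = new WebClient())
                {
                    // Issue the GET request and read the XML response body.
                    string response = client.DownloadString(url);

                    XmlDocument doc = new XmlDocument();
                    doc.LoadXml(response);

                    // Walk the (assumed) result elements and print each URL.
                    foreach (XmlNode node in doc.SelectNodes("//Result/Url"))
                    {
                        Console.WriteLine(node.InnerText);
                    }
                }
            }
        }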

    Read the article

  • Serious error on first attempts to dual boot Ubuntu 14.04 with Win7

    - by beetle
    I downloaded Ubuntu 14.04 from the website and saved it to my desktop, to be opened with WinRAR. My WinRAR trial had expired, so I have now tried it with Active@ ISO Burner, but I'm getting no further. I eventually got it burnt onto a DVD (4.7 GB) and tried to boot from the DVD and normally. Neither way works. It looks like it's about to boot, but then a message appears saying that a serious error has occurred... "the disk drive for /tmp is not ready yet or not present... press I to ignore, S to skip or M for manual..." At this point I'm lost and unsure what to do. My laptop, a Toshiba Equium A210-17I, is 5 or 6 years old. Available space on the hard drive is 24 GB, with 2 GB of RAM. It originally came with Windows Vista Home Premium edition, but a year or more ago a friend wiped it clean for me, as I was having no end of problems with Vista. He installed Windows 7 Ultimate (which I don't have a disc for). How can I resolve this issue and get Ubuntu to boot up? Do I have to install a previous version of Ubuntu first? Any advice or help would be greatly appreciated. Kind regards, Beetle.

    Read the article

  • My first blog post…

    - by steveh99999
    I’ve been meaning to start a blog for a while now (OK, for several years...), so finally, here it begins. First post: something really simple, but a wise man once told me the best way to improve SQL Server performance. Store less data. That's it... that's all there is to it. Over the years, I've seen the following:
    - A 200 GB database which held 3 days of data. Once business requirements changed, we were able to hold only 1 day's data in this database.
    - A table developed by DBAs to hold application table cardinality information. That information was collected at 2-hour intervals every day for 7 years! After 7 years the DBA space-info table had become the largest table in the database, at 60 million rows! It was a simple change to remove a lot of the historical intra-day data and change the schedule to run only once per evening. Suddenly that table held 6 million rows instead of 60 million...
    - Lots of backup and restore history held in msdb. See this post by Brent Ozar for more details on this issue.
    Imagine how much faster the backups, DBCC checks and reindexes ran when the above 3 changes were implemented? How often do you review your big databases and tables to see if you're actually holding only data that is really required by the business?

    Read the article

  • Seek first to understand, then to be understood

    - by BuckWoody
    One of the most important (and most difficult) lessons for a technical professional to learn is to not jump to the solution. Perhaps you’ve done this, or had it happen to you. As the person you’re “listening” to is speaking, your mind is performing a B-Tree lookup on possible solutions, and when the final node of the B-Tree in your mind is reached, you blurt out the “only” solution there is to the problem, whether they are done or not. There are two issues here – both of them fatal if you don’t factor them in. First, your B-Tree may not be complete, or correct. That of course leads to an incorrect response, which blows your credibility. People will not trust you if this happens often. The second danger is that the person may modify their entire problem with a single word or phrase. I once had a client explain a detailed problem to me – and I just KNEW the answer. Then they said at the end “well, that’s what it used to do, anyway. Now it doesn’t do that anymore.” Which of course negated my entire solution – happily I had kept my mouth shut until they finished. So practice listening, rather than waiting for your turn to speak. Let the person finish, let them get the concept out, give them your full attention. They’ll appreciate the courtesy, you’ll look more intelligent, and you both may find the right answer to the problem.

    Read the article

  • OAM OVD integration - Error encountered during performance test: "LDAP response read timed out, timeout used:2000ms"

    - by siddhartha_sinha
    While working on an OAM-OVD integration for one of my clients, I was involved in performance testing of the products, during which I encountered OAM authentication failures when talking to OVD under heavy load. The OAM logs revealed the following:
        oracle.security.am.common.policy.common.response.ResponseException:
        oracle.security.am.engines.common.identity.provider.exceptions.IdentityProviderException:
        OAMSSA-20012: Exception in getting user attributes for user : dummy_user1, idstore MyIdentityStore
        with exception javax.naming.NamingException: LDAP response read timed out, timeout used:2000ms.;
        remaining name 'ou=people,dc=oracle,dc=com'
            at oracle.security.am.common.policy.common.response.IdentityValueProvider.getUserAttribute(IdentityValueProvider.java:271)
            ...
    During the authentication and authorization process, OAM complains that the LDAP repository is taking too long to return user attributes. The default value is 2 seconds, as can be seen from the exception ("2000ms"). While troubleshooting the issue, it was found that the LDAP read timeout can be increased in oam-config.xml. For reference, the attribute to add in the oam-config.xml file is:
        <Setting Name="LdapReadTimeout" Type="xsd:string">2000</Setting>
    However, it is not recommended to increase the timeout unless it is absolutely necessary; first ensure that the back-end directory servers are working fine. Instead, I took the path of tuning OVD in the following manner:
    1. Navigate to the ORACLE_INSTANCE/config/OPMN/opmn folder and edit opmn.xml. Search for <data id="java-options" ...> and edit the contents of the file as follows:
        <category id="start-options">
          <data id="java-bin" value="$ORACLE_HOME/jdk/bin/java"/>
          <data id="java-options" value="-server -Xms1024m -Xmx1024m -Dvde.soTimeoutBackend=0 -Didm.oracle.home=$ORACLE_HOME -Dcommon.components.home=$ORACLE_HOME/../oracle_common -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/opt/bea/Middleware/asinst_1/diagnostics/logs/OVD/ovd1/ovdGClog.log -XX:+UseConcMarkSweepGC -Doracle.security.jps.config=$ORACLE_INSTANCE/config/JPS/jps-config-jse.xml"/>
          <data id="java-classpath" value="$ORACLE_HOME/ovd/jlib/vde.jar$:$ORACLE_HOME/jdbc/lib/ojdbc6.jar"/>
        </category>
        </module-data>
        <stop timeout="120"/>
        <ping interval="60"/>
        </process-type>
    When the system is busy, a ping from the Oracle Process Manager and Notification Server (OPMN) to Oracle Virtual Directory may fail. As a result, OPMN will restart Oracle Virtual Directory after 20 seconds (the default ping interval). To avoid this, consider increasing the ping interval to 60 seconds or more.
    2. Navigate to the ORACLE_INSTANCE/config/OVD/ovd1 folder. Open the listeners.os_xml file and perform the following changes:
    - Search for <ldap id="Ldap Endpoint"...> and point the cursor to that line.
    - Change the threads count to 200.
    - Change anonymous bind to Deny.
    - Change workQueueCapacity to 8096.
    - Add a new parameter <useNIO> and set its value to false, viz: <useNIO>false</useNIO>
    Snippet:
        <ldap version="8" id="LDAP Endpoint">
          .......
          <socketOptions>
            <backlog>128</backlog>
            <reuseAddress>false</reuseAddress>
            <keepAlive>false</keepAlive>
            <tcpNoDelay>true</tcpNoDelay>
            <readTimeout>0</readTimeout>
          </socketOptions>
          <useNIO>false</useNIO>
        </ldap>
    Restart the OVD server. For more information on OVD tune-up, refer to http://docs.oracle.com/cd/E25054_01/core.1111/e10108/ovd.htm.
    Please note: a few patches were released on the OAM side for performance tune-up as well. Will provide the updates shortly!

    Read the article

  • Test and Report Add-on Compatibility in Firefox

    - by Asian Angel
    Now that the new version of Firefox is out, you probably have a favorite extension or two that has not been updated yet. You can get that extension working again, test it, and report back to Mozilla on how well it does with the Add-on Compatibility Reporter extension.
    Before: For our example we chose a great extension that unfortunately has not been updated yet. As you can see here, Firefox is refusing to let the extension install.
    After: As soon as you install Add-on Compatibility Reporter you will be presented with an information page on how the extension works and what you can do with it. You should definitely take a moment to read this, as it is very helpful. After trying our non-compatible extension again we were able to proceed with the install process. Notice at the bottom that "compatibility checking" has been overridden. Success! As soon as we restarted our browser it was easy to see the "non-compatible icon" in the "Add-ons Manager" window... but the extension did install (terrific!). Clicking on the extension's entry will reveal a new button in the lower right corner. Using the "Compatibility Drop-Down Menu" you can report whether the extension is working as well as before or is actually having problems. The extension that we used for our example had no problems whatsoever, so good news there. Whichever option you choose, you will be presented with a small "Report Window" with information about the extension, your browser's version number, and your operating system. Click "Submit Report" to send it on its way. You will see a confirmation message letting you know that your report was successfully submitted. While the extension itself has not been altered in any form, at least you have it working again and have helped verify whether it still works well or not. Notice the "notation" present now in place of the "Compatibility Button" that lets you know that you have already taken care of that particular extension. Looking great...
    Conclusion: If you have a favorite extension that you miss using in the newest release of Firefox, then this is definitely an extension to add to your browser. Not only will your extension start working again, but you can let Mozilla know how well it is working and (hopefully) help get the extension updated.
    Links: Download the Add-on Compatibility Reporter extension (Mozilla Add-ons)

    Read the article

  • First-Time GLSL Shadow Mapping Problems

    - by Locke
    I'm working on building out a 2.5D engine and having massive problems getting my shadows working. I'm at a point where I'm VERY close. So, let's see a picture to see what I have: As you can see above, the image has lighting, but the shadow map is displaying incorrectly. The shadow map is shown in the bottom left-hand side of the screen as a normal 2D texture, so we can see what it looks like at any given time. If you notice, it appears that the shadows are generating backwards, in the wrong direction, I think. But the problem is a little deeper: I'm just plotting the shadow onto the screen, which I know is wrong; I'm ignoring the actual test to see if we NEED to show a shadow. The incoming parameters all appear to be correct, so there has to be something wrong with my shader code somewhere. Here's what my code looks like:
    VERTEX:
        uniform mat4 LightModelViewProjectionMatrix;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            Normal = normalize(gl_NormalMatrix * gl_Normal);
            LightDirection = normalize(gl_NormalMatrix * gl_LightSource[0].position.xyz);
            LightCoordinate = LightModelViewProjectionMatrix * gl_Vertex;
            LightCoordinate.xy = (LightCoordinate.xy * 0.5) + 0.5;
            gl_Position = ftransform();
            gl_TexCoord[0] = gl_MultiTexCoord0;
        }
    FRAGMENT:
        uniform sampler2D DiffuseMap;
        uniform sampler2D ShadowMap;

        varying vec3 Normal;          // The eye-space normal of the current vertex.
        varying vec4 LightCoordinate; // The texture coordinate of the light of the current vertex.
        varying vec3 LightDirection;  // The eye-space direction of the light.

        void main()
        {
            vec4 Texel = texture2D(DiffuseMap, vec2(gl_TexCoord[0]));

            // Directional lighting
            // Build ambient lighting
            vec4 AmbientElement = gl_LightSource[0].ambient;

            // Build diffuse lighting
            float Lambert = max(dot(Normal, LightDirection), 0.0); //max(abs(dot(Normal, LightDirection)), 0.0);
            vec4 DiffuseElement = (gl_LightSource[0].diffuse * Lambert);

            vec4 LightingColor = (DiffuseElement + AmbientElement);
            LightingColor.r = min(LightingColor.r, 1.0);
            LightingColor.g = min(LightingColor.g, 1.0);
            LightingColor.b = min(LightingColor.b, 1.0);
            LightingColor.a = min(LightingColor.a, 1.0);
            LightingColor *= Texel;

            // Everything up to this point is PERFECT

            // Shadow mapping
            // ------------------------------
            vec4 ShadowCoordinate = LightCoordinate / LightCoordinate.w;
            float DistanceFromLight = texture2D(ShadowMap, ShadowCoordinate.st).z;
            float DepthBias = 0.001;
            float ShadowFactor = 1.0;
            if (LightCoordinate.w > 0.0)
            {
                ShadowFactor = DistanceFromLight < (ShadowCoordinate.z + DepthBias) ? 0.5 : 1.0;
            }
            LightingColor.rgb *= ShadowFactor;

            //gl_FragColor = LightingColor; // Yes, I know this is wrong, but the line above produces the wrong effect
            gl_FragColor = LightingColor * texture2D(ShadowMap, ShadowCoordinate.st);
        }
    I wanted to make sure the coordinates were correct for the shadow map, so that's why you see it applied to the image as it is below. But the depth for each point seems to be wrong: the shadows SHOULD be opposite (look at how the image is; the shaded areas from normal lighting are facing the opposite direction of the shadows). Maybe my matrices are bad or something going in? They're isolated and appear to be correct; nothing else going in is unusual.
    When I view from the light's view and get the MVP matrices for it, they're correct. EDIT: Added an image so you can see what happens when I do the correct command at the end of the GLSL: That's the image when the last line is just gl_FragColor = LightingColor; Maybe someone has some idea of what I screwed up?

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error:
        D:\java\java\jogl>javac JOGL2Template.java   <== compiles OK
        D:\java\java\jogl>java JOGL2Template         <== execution error
        Exception in thread "main" java.lang.ExceptionInInitializerError
            at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176)
            at JOGL2Template.<init>(JOGL2Template.java:24)
            at JOGL2Template.main(JOGL2Template.java:57)
        Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\java\lib\gluegen-rt-natives-windows-i586.jar
            at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350)
            at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324)
            at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJarCache.java:328)
            at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarCache.java:283)
            at com.jogamp.common.os.Platform$3.run(Platform.java:308)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298)
            at com.jogamp.common.os.Platform.<clinit>(Platform.java:207)
            ... 3 more
    Here is the JOGL2Template.java source code:
        import java.awt.Dimension;
        import java.awt.Frame;
        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLCapabilities;
        import javax.media.opengl.GLEventListener;
        import javax.media.opengl.GLProfile;
        import javax.media.opengl.awt.GLCanvas;
        import com.jogamp.opengl.util.FPSAnimator;
        import javax.swing.JFrame;

        /*
         * JOGL 2.0 Program Template for AWT applications
         */
        public class JOGL2Template extends JFrame implements GLEventListener {

            private static final int CANVAS_WIDTH = 640;  // Width of the drawable
            private static final int CANVAS_HEIGHT = 480; // Height of the drawable
            private static final int FPS = 60;            // Animator's target frames per second

            // Constructor to create profile, caps, drawable, animator, and initialize Frame
            public JOGL2Template() {
                // Get the default OpenGL profile that best reflects your running platform.
                GLProfile glp = GLProfile.getDefault();

                // Specifies a set of OpenGL capabilities, based on your profile.
                GLCapabilities caps = new GLCapabilities(glp);

                // Allocate a GLDrawable, based on your OpenGL capabilities.
                GLCanvas canvas = new GLCanvas(caps);
                canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT));
                canvas.addGLEventListener(this);

                // Create an animator that drives canvas' display() at 60 fps.
                final FPSAnimator animator = new FPSAnimator(canvas, FPS);

                addWindowListener(new WindowAdapter() { // For the close button
                    @Override
                    public void windowClosing(WindowEvent e) {
                        // Use a dedicated thread to run the stop() to ensure that the
                        // animator stops before the program exits.
                        new Thread() {
                            @Override
                            public void run() {
                                animator.stop();
                                System.exit(0);
                            }
                        }.start();
                    }
                });

                add(canvas);
                pack();
                setTitle("OpenGL 2 Test");
                setVisible(true);
                animator.start(); // Start the animator
            }

            public static void main(String[] args) {
                new JOGL2Template();
            }

            @Override
            public void init(GLAutoDrawable drawable) {
                // Your OpenGL codes to perform one-time initialization tasks
                // such as setting up of lights and display lists.
            }

            @Override
            public void display(GLAutoDrawable drawable) {
                // Your OpenGL graphic rendering codes for each refresh.
            }

            @Override
            public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
                // Your OpenGL codes to set up the view port, projection mode and view volume.
            }

            @Override
            public void dispose(GLAutoDrawable drawable) {
                // Hardly used.
            }
        }
    Any ideas what might be the cause of these errors?

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has a negative impact on the visibility between the WiMAX outdoor unit on the roof and the aimed access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all.
    The site survey: Well, the two technicians did their work properly, and even re-arranged the antenna to check the connection with another end point down at La Preneuse. But no improvements. Looks like we are out of luck, since the construction next door hasn't finished yet, and at the moment it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do...
    Testing the connection: There are several online sites available which offer to check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago I posted the following on Emtel's wall on Facebook (21.05.2013 - 14:06 hrs): "Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint..." Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different end points and to compare them to each other.
    The results: Following are the results for Rose Hill (hosted by Emtel) and for Frankfurt, Germany (hosted by Vodafone DE): [Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband)] [Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband)] Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (or roughly 0.75 Mbps). In terms of downloads or uploads this means that I should be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser and operating system, I usually exceed this value and see download rates of up to 120 KB/s - not too bad after all. Only the ping times are a little bit of a concern. Determined by the distance, or better said by the number of hops between the endpoints, they indicate the amount of time that it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe.
    The alternatives in Mauritius: Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • Test Drive for Partners on Oracle Endeca Information Discovery

    - by Mike.Hallett(at)Oracle-BI&EPM
    Specifically for Oracle Partners, this half-day hands-on workshop allows you to experience Information Discovery from Oracle in order to:
    - Understand Information Discovery and how it complements classic BI solutions
    - Use Search and Guided Navigation to see how structured and unstructured information can be rapidly brought together to unlock hidden value
    - Explore all of your data in any format and from any source, including social media, market surveys and reports
    - Lay the foundation for helping business users who need fast answers to new questions
    - Experience the amazing performance of Endeca on Oracle's in-memory Exalytics machine
    Agenda: After an introduction to Oracle Endeca Information Discovery, follow a self-paced, supervised, hands-on tutorial where you will see how easy it is to:
    - Use Guided Navigation and Search to explore structured and unstructured data
    - Rapidly integrate new and changing data sources such as social media
    - Build new Discovery user interfaces
    - Rapidly respond to changing business needs and data environments
    And ask questions of Oracle's Business Analytics experts throughout.
    When: 14th March 2013; registration 9:00 a.m., finish by 1:00 p.m.
    What: Oracle Endeca Information Discovery Test Drive
    Where: Oracle City Office, 1 South Place, London, EC2M 2RB
    Register Now

    Read the article

  • SO-Aware at the Atlanta Connected Systems User Group

    - by gsusx
    Today my colleague Don Demsak will be presenting a session about WCF management, testing and governance using SO-Aware and the SO-Aware Test Workbench at the Connected Systems User Group in Atlanta. Don is a very engaging speaker and has prepared some very cool demos based on lessons from real-world WCF solutions. If you are in the ATL area and interested in WCF, AppFabric or BizTalk, you should definitely swing by Don's session. Don't forget to heckle him a bit (you can blame me for it ;))... (read more)

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx
    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through:
    - What is Cloud Based Load Testing?
    - How have I been using this feature? - Success story!
    - Where can you find more resources on this feature?
    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry, I have realized that the biggest barriers to Load Testing and Performance Testing adoption are:
    - The high infrastructure and administration cost that comes with this phase of testing
    - The time taken to procure and set up the test infrastructure
    - Finding a use for this infrastructure investment after completion of testing
    Is cloud the answer?
    - 100% Visual Studio compatible
    - Scalable and realistic
    - Start testing in < 2 minutes
    - Intuitive
    - Pay only for what you need
    - Use existing on-premise tests on the cloud
    There are a lot of vendors out there offering cloud based load testing; to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others. The question you may want to ask is why you should go with Microsoft's cloud based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If your assets are already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost from the generated traffic, since the traffic comes from the same data centre. The licensing or pricing information on Microsoft's cloud based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on the Microsoft cloud based load test service is to select the test run location as "Run tests using Visual Studio Team Foundation Service".
    How have I been using Microsoft's cloud based load test service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service is an insurance quote generation engine. This insurance quote generation engine is:
    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (not clear how this load will be distributed across the day)
    There was no way I could simulate that kind of load from on-premise without standing up additional hardware. But Microsoft's cloud based load test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing and testing for latency.
    Simulating expected load - approach to devising a load pattern: My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, one quote request from the quote engine takes 0.5 seconds, i.e. one user generates 2 quotes per second. Now if you do the math (a small calculation sketch follows at the end of this post):
        1 quote request by 1 user                = 0.5 seconds
        quotes generated by 1 user in 24 hours   = 1 * (((2 * 60) * 60) * 24) = 172,800
        quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern. Some examples of load test patterns can be found here.
    Endurance testing: To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test Service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
    Threshold testing: To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing the user load, with different variations of incremental user load per minute, till the application crashed out and forced an IIS reset.
    Testing for latency: Currently the Microsoft Load Test Service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
    More resources on the Microsoft cloud based load test service. A few important links to get you started:
    - Download Visual Studio Ultimate 2013 Preview
    - Getting started guide for load testing using Team Foundation Service
    - Troubleshooting guide for FAQs and known issues
    - Team Foundation Service forum for questions and support
    - Detailed demo and presentation (link to Tech-Ed session recording)
    - Detailed demo and presentation (link to Build session recording)
    There are a few limits on the usage of the Microsoft cloud based load test service that you can read about here. If you have any feedback on the Microsoft cloud based load test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
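    As promised above, here is the calculation sketch. This is illustrative C# written for this summary, not code from the original post; the figures are the ones from the walkthrough.
        using System;

        class LoadPatternCalculator
        {
            static void Main()
            {
                // Figures taken from the walkthrough above.
                double responseTimeSeconds = 0.5; // one quote request by one user
                double dailyTarget = 5000000.0;   // quote requests expected per day

                // One user issues (24h / response time) sequential requests per day.
                double requestsPerUserPerDay = (24 * 60 * 60) / responseTimeSeconds; // 172,800
                double requiredUsers = dailyTarget / requestsPerUserPerDay;          // ~28.9

                Console.WriteLine("Requests per user per day: {0:N0}", requestsPerUserPerDay);
                Console.WriteLine("Concurrent users required: {0:N1}", requiredUsers);
                Console.WriteLine("Ceiling: {0}", Math.Ceiling(requiredUsers));
                // Rounding up with a little headroom gives the constant
                // 30-user load used in the post (30 users -> 5,184,000/day).
            }
        }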

    Read the article

  • SQL SERVER – Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(). These functions return the first and last value from a list. It would be difficult to explain this in words alone, so let me attempt to explain their behaviour through a brief example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experimentation. Now let's have fun with the following query:
        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO
    The above query will give us the following result: The most interesting thing here is that as we go from row 1 to row 10, the value of FIRST_VALUE() remains the same but the value of LAST_VALUE() keeps increasing. The reason behind this is that at every row, considering that row and all the rows before it, the last value will be that of the row we are currently looking at. To fully understand this statement, see the following figure: This may be useful in some cases, but not always. However, when we use the same thing with PARTITION BY, the same query starts showing results which can easily be used in analytical algorithms. Let us have fun with the following query:
        USE AdventureWorks
        GO
        SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
            FIRST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) FstValue,
            LAST_VALUE(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LstValue
        FROM Sales.SalesOrderDetail s
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
        GO
    The above query will give us the following result: Let us understand how PARTITION BY windows the resultset. I have used PARTITION BY SalesOrderID in my query. This creates small windows of the resultset from the original resultset, and the FIRST_VALUE and LAST_VALUE logic is applied within each window. Well, this is just an introduction to these functions. In future blog posts we will go deeper into the usage of these two functions. By the way, these functions can be applied to VARCHAR fields as well and are not limited to numeric fields only.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • In rails, what defines unit testing as opposed to other kinds of testing

    - by junky
    Initially I thought this was simple: unit testing is for models, with other testing, such as integration tests, for controllers, and browser testing for views. But more recently I've seen a lot of references to unit testing that don't seem to exactly follow this format. Is it possible to have a unit test of a controller? Does that mean that just one method is called? What's the distinction? What does unit testing really mean in my Rails world?

    Read the article

  • First Ever MySQL on Windows Online Forum - March 16, 2011

    - by monica.kumar
    Now you might be thinking... what's an Online Forum? Well, think of it as a virtual conference, where you can attend a series of presentations about a given topic from the comfort of your own office/home. On Wednesday March 16th, from 9.00 am PT to 12.00, we will be running the first ever MySQL Online Forum, dedicated to MySQL on Windows. Register now to learn how you can reduce your database TCO on Windows by up to 90% while increasing manageability & flexibility! Oracle's MySQL Vice President of Engineering Tomas Ulin will kick off a comprehensive agenda of presentations enabling you to better understand:
    - How you can save up to 90% by using MySQL on Windows
    - Why the world's most popular open source database is extremely popular on Windows, both for enterprise users and for embedding by ISVs
    - How MySQL is a great fit for the Windows environment, and what are the upcoming milestones to make MySQL even better on the Microsoft platform
    - What visual tools are at your disposal to effectively develop, deploy and manage MySQL applications on Windows
    - How you can deliver highly available, business critical Windows based MySQL applications
    - Why Security Solutions Provider SonicWall selected MySQL over Microsoft SQL Server, and how they successfully deliver MySQL based solutions
    Plus, as we'll have Live Chat on during the entire forum, you'll be able to ask questions of MySQL experts online at any time. Register Now! Whether you're an ISV or an enterprise user, either already running MySQL on Windows or simply considering it, join us and learn how you can get performance, lower TCO and increased manageability & flexibility with MySQL on Windows!

    Read the article

  • Beginning with first project on game development [closed]

    - by Tsvetan
    Today is the day I am going to start my first real game project. It will be a universe simulator: basically, you can build anything from a tiny meteor to quasars and whole universes. It is going to be my project for an IT olympiad in my country and I really want to make it perfect (at least a bronze medal). So, I would like to ask some questions about organization and development methodologies. Firstly, my plan is to make a time schedule. In it I would write my plans for the next month or two (because that is the time I have). With this exact plan I hope to make my organisation at its best. Of course, if I am doing something faster than the schedule, I would add more features to the game and/or continue with the tempo I have. Also, for the organisation I would make some basic pseudocode (maybe) and just rewrite it so it is compilable; like a basic skeleton of everything. That last one is an idea I thought of in the moment, but if it is good I will use it. Secondly, for the development methodologies: obviously, I plan to write object-oriented code and make everything perfect (a lot of testing, good code, documentation, etc.). Also, I am going to make my own menu system (I read that OpenGL hasn't got a very good one). Maybe I would implement it with an XML file holding the info about the positions of buttons, text boxes, images and everything. Maybe I would write a specific CSS for it, and so on. I think that is a very good way of doing the menu system, because it keeps the presentation layer separate from the logic. But if there is a better way, I would do it the better way. For the logic, well, I don't have much to say: OO code, testing, debugging, good and fast algorithms, and so on. Also, good documentation must be written, and this is an area I need to do some research in. I think that is it for now. I hope I have been descriptive enough. If more questions come to my mind, I will ask them.
    Edit: I am thinking of blogging about every part of the project, or at least writing everything down in a file or something like that. My question is: is my plan for how to do everything around the project good? And if not, what needs to be improved, and what other things can I include to make the project good?

    Read the article

  • Executing server validators first before OnClientClick Javascript confirm/alert

    - by kaushalparik27
    I got to answer a simple question over the community forums. Consider this: suppose you are developing a webpage with a few input controls and a submit button. You have placed some server validator controls, like RequiredFieldValidator, to validate the inputs entered by the user. Once users fill in all the details and try to submit the page via the button click, you want to alert/confirm the submission with something like "Are you sure to modify above details?". You would consider using JavaScript on the click of the button. Everything seems simple and you are almost done. BUT, when you run the page, you will see that the JavaScript alert/confirm box executes first, before the server validators try to validate the input controls! Well, this is expected behaviour, BUT this is not what you want. Then? The simple answer is: call Page_ClientValidate() in the JavaScript where you are confirming the submission. Below is a JavaScript example:
        <script type="text/javascript" language="javascript">
            function ValidateAllValidationGroups() {
                if (Page_ClientValidate()) {
                    return confirm("Are you sure to modify above details?");
                }
            }
        </script>
    The Page_ClientValidate() function tests all server validators and returns a bool value depending on whether the page meets the defined validation criteria. In the above example, the confirm box will only pop up if Page_ClientValidate() returns true (that is, if all validations are satisfied). You can also specify a ValidationGroup inside this function, as Page_ClientValidate('ValidationGroup1'), to validate only a specific group of validators in your page:
        function ValidateSpecificValidationGroup() {
            if (Page_ClientValidate('ValidationGroup1')) {
                return confirm("Are you sure to modify above details?");
            }
        }
    I have attached a sample example with this post demonstrating both of the above cases. Hope it helps.

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
    In the past, Sunday used to be a more relaxing day, with 'just' some JavaOne activities going on; a soft day to prepare yourself for an exhausting week. This is now over, as JavaOne is expanding: Sunday is now an integral part of the conference. One side effect of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier, on Saturday (can you spot the pattern here?). On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event, where the Java EE 7 and GF 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GF: why they like GF, but also what could be improved. The event was also a forum for the GF community to exchange with some of the key Java EE / GlassFish Oracle executives and the different GF team members. The strategy keynote and the technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.
    Modularity: Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. In the shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6: the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base profile (10 MB) to the Full profile (50+ MB), has been proposed for Java SE 8.
    Embedded: Embedded is a strong theme going forward for the Java platform; there is now a dedicated program: Java Embedded @ JavaOne. Java by nature (platform independence, built-in security, the ability to easily talk to any back-end system, the large set of skills available on the market, etc.) is probably the platform best suited for the Internet of Things. You can quickly be up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi ARM board ($25 if you are cheap and can live without an ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on the Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded but also GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.
    JavaFX: JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK. JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM; during the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is strong momentum around JavaFX: it's the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it's time to reconsider it! As an old Chinese proverb says, "One tweet is worth a thousand words!"
    HTML5, Project Avatar and Java EE 7: HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle's incubator project for a TSA (Thin Server Architecture) solution, were diluted into and shown during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demoed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...
    Feedback: We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX and OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for years to come! So if you have the opportunity, download, install, learn and test them, and give feedback! Remember, you can "Make the Future Java!"
    Finally, the traditional GlassFish party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes more noisy) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article

  • TypeScript first impressions

    - by Bertrand Le Roy
    Anders published a video of his new project today, which aims at creating a superset of JavaScript that compiles down to regular current JavaScript. Anders is a tremendously clever guy, and it always shows in his work. There is much to like in the enterprise (good code completion, refactoring, and adoption of the module pattern instead of namespaces, to name three), but a few things made me raise an eyebrow. First, there is no mention of CoffeeScript or Dart, but he does talk briefly about Script# and GWT. This is probably because the target audience seems to be the same as the audience for the latter two, i.e. developers who are more comfortable with statically-typed languages such as C# and Java than with dynamic languages such as JavaScript. I don't think he's aiming at JavaScript developers. Classes and interfaces, although well executed, are not especially appealing. Second, as with any code generation tool (and this is true of CoffeeScript as well), you'd better like the generated code. I didn't, unfortunately. The code that I saw is not the code I would have written. What's more, I didn't always find the TypeScript code especially more expressive than what it gets compiled to. I also have a few questions. Is it possible to duck-type interfaces? For example, if I have an IPoint2D interface with x and y coordinates, can I pass any object that has x and y into a function that expects IPoint2D, or do I need to create a class that implements that interface and new up an instance that explicitly declares its contract? The appeal of dynamic languages is the ability to make objects as you go. This needs to be kept intact. More technical: why are generated variables and functions prefixed with _ rather than the $ that the EcmaScript spec recommends for machine-generated variables? In conclusion, while this is a good contribution to the set of ideas around JavaScript evolution, I don't expect a lot of adoption outside of devoted Microsoft developers, but maybe some influence on the language itself. But I'm often wrong. I would certainly not use it, because I disagree with the central motivation for doing this: Anders explicitly says he built this because "writing application-scale JavaScript is hard". I would restate that as "writing application-scale JavaScript is hard for people who are used to statically-typed languages". The community has built a set of good practices over the last few years that do scale quite well, and many people are successfully developing and maintaining impressive applications directly in JavaScript. You can play with TypeScript here: http://www.typescriptlang.org

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site, and high time that something was posted. I have a list of topics that I will be working through and posting. Some, I am sure, will have been covered by others, but I will be sticking to the technical problems and challenges that I've recently faced, and the solutions that worked for me. My motto when learning something new has always been "My kingdom for an example!", and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me: my name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and I focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions. I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem…

    SSIS Connection Timeouts versus Command Timeouts

    Last week, I was working on automating the processing for a large Analysis Services cube. I had reworked an SSIS package and script task, originally posted by Vidas Matelis, that automates the process of adding new partitions to and dropping old partitions from an Analysis Services cube. I had the package working great, tested, and ready for deployment. It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if any partitions present are no longer needed in a rolling 60-month window.

    My client uses Tivoli, not SQL Agent, for running all their production jobs, so I had to build a command-line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation, using an XML configuration file to pass server-specific parameters into the package when executed with the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.

    On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again it failed. After several more failed attempts, and after getting the Teradata DBAs involved to monitor whether we were connecting and then failing, or simply failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds. This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.

    At this point, one of the DBAs found a post on the Teradata forum that held the clue to the puzzle: there is a separate CommandTimeout custom property on the data source object that may need to be adjusted for longer-running queries. I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds. The CommandTimeout property can also be edited in the SSIS Properties sheet.

    In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds. I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn't. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute. Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold. First, always compare query execution times between development and production environments, and don't assume that production will always be faster. With higher user demands, query governors, and a whole lot more data, the execution time of even seemingly simple queries can vary greatly. Second, SSIS connection timeout settings do not affect command timeouts. Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding; command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding.

    Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was. To be fair, though, in the 5+ years that I have been working with SSIS, I can recall only one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • Bullet Physics - Casting a ray straight down from a rigid body (first person camera)

    - by Hydrocity
    I've implemented a first-person camera using Bullet — it's a rigid body with a capsule shape. I've only been using Bullet for a few days, and physics engines are new to me. I use btRigidBody::setLinearVelocity() to move it, and it collides perfectly with the world. The only problem is that the Y-value moves freely, which I temporarily solved by setting the Y-value of the translation vector to zero before the body is moved. This works for all cases except when falling from a height. When the body drops off a tall object, you can still glide around (since the translation vector's Y-value is being set to zero) until you stop moving and fall to the ground, because the velocity is only set while moving.

    To solve this, I would like to try casting a ray straight down from the body to determine the Y-value of the world, then checking the difference between that value and the Y-value of the camera body, and disabling or slowing down movement if the difference is large enough. I'm a bit stuck on simply casting a ray and determining the Y-value of the world where it struck. I've implemented this callback:

        struct AllRayResultCallback : public btCollisionWorld::RayResultCallback
        {
            AllRayResultCallback(const btVector3& rayFromWorld, const btVector3& rayToWorld)
                : m_rayFromWorld(rayFromWorld),
                  m_rayToWorld(rayToWorld),
                  m_closestHitFraction(1.0) {}

            btVector3 m_rayFromWorld;
            btVector3 m_rayToWorld;
            btVector3 m_hitNormalWorld;
            btVector3 m_hitPointWorld;
            float m_closestHitFraction;

            virtual btScalar addSingleResult(btCollisionWorld::LocalRayResult& rayResult,
                                             bool normalInWorldSpace)
            {
                if (rayResult.m_hitFraction < m_closestHitFraction)
                    m_closestHitFraction = rayResult.m_hitFraction;

                m_collisionObject = rayResult.m_collisionObject;

                if (normalInWorldSpace)
                {
                    m_hitNormalWorld = rayResult.m_hitNormalLocal;
                }
                else
                {
                    m_hitNormalWorld = m_collisionObject->getWorldTransform().getBasis()
                                       * rayResult.m_hitNormalLocal;
                }

                m_hitPointWorld.setInterpolate3(m_rayFromWorld, m_rayToWorld,
                                                m_closestHitFraction);
                return 1.0f;
            }
        };

    And in the movement function, I have this code:

        btVector3 from(pos.x, pos.y + 1000, pos.z); // pos is the camera's rigid body position
        btVector3 to(pos.x, 0, pos.z);              // not sure if 0 is correct for Y

        AllRayResultCallback callback(from, to);
        Base::getSingletonPtr()->m_btWorld->rayTest(from, to, callback);

    So I have the callback.m_hitPointWorld vector, but it seems to just show the position of the camera each frame. I've searched Google for examples of casting rays, as well as the Bullet documentation, and it's been hard to just find an example. An example is really all I need. Or perhaps there is some method in Bullet to keep the rigid body on the ground? I'm using Ogre3D as the rendering engine, and casting a ray down is quite straightforward with that, but I want to keep all the ray casting within Bullet for simplicity. Could anyone point me in the right direction? Thanks.

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background

    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as its command-line input argument. The format of this configuration file changed between PS4 and PS5: in PS4, the configuration file was XML-based, while in PS5 it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File

    The sample configuration file for PS4 is shown below. The configuration file contains information about the following items:

    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints

    A copy of this file can be downloaded from here.

    PS5 - Configuration File

    The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format; it has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration

    Before running the simulator, the environment has to be set up by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script that sets the environment. Also, as part of setting up the environment, template jndi.properties and logging.properties files can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    Sample jndi.properties and logging.properties files are shown below and can be modified as needed. The jndi.properties file contains information about connectivity to the local WebLogic Managed Server instance, and the logging.properties file controls the amount of logging generated by the running simulator process.

    Simulator Usage - Start and Stop

    The command syntax to launch the simulator via ant is the same in PS4 and PS5; only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator and keep it running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • My First 5K, the recap.

    - by Chris Williams
    It was a nice day to be outside (and trust me, those aren't words you'll hear from me often). I got to the site around 7:45, hit the pre-reg table and got my number, along with a goody bag full of coupons for racing gear, a water bottle and a t-shirt. Oh, and a map. I stashed all that stuff in the jeep, emptied my pockets of everything but my iPhone and my jeep key, and walked around for a bit as people started showing up and signing in. It was fairly breezy, and there was definitely a storm coming... but it was anyone's guess when it would actually arrive.

    It was interesting to see everyone who was participating. If I had to guess, I would say the event was 60-70% women, with a pretty broad distribution of ages... from as young as 13 to well over 60 (in both genders). I don't know exactly how many folks were there, but it was well over 300.

    Eventually it was time to kick things off, and everyone made their way to the start line. All of the 5K and 10K runners were mixed together, starting at the same time, with the walkers and the people with strollers or dogs in the back. It was pretty chaotic at first, once things started, but it thinned out fairly quickly. The 10K people and the hardcore runners sped ahead of everyone else, and the walkers gradually lagged behind.

    The 5K course was pretty nice, winding around a lake down in Eden Prairie. The 10K course overlapped most of ours, but branched off a couple of times too. I didn't run the whole time, but I started the race running and I ended it running, with a mix of walking and running along the way. I met my goals, which were a) don't ever stop and b) don't be last.

    The weather managed to hold out for the entire race. It never got too hot, there was a nice breeze and it was mostly overcast. Pretty much perfect in my book. About 20-30 minutes after I left, the rain came down pretty hard. I had a good time, and will most likely do more of them. We'll see.

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >