Search Results

Search found 1308 results on 53 pages for 'dr hannibal lecter'.

Page 50/53 | < Previous Page | 46 47 48 49 50 51 52 53  | Next Page >

  • EJB client can't find a DataSource that tests successfully in WebLogic admin console

    - by suszterpatt
    Disclaimer: I'm completely new to JEE/EJB and all that, so bear with me. I have a simple EJB that I can successfully deploy using JDeveloper 11g's integrated WebLogic server and a remote database connection (JDBC). I have a DataSource named "PGY2" defined in WebLogic, and I can test it successfully from the admin console. Here's the code of the client I'm trying to test it with (generated entirely by JDev except for the three method calls on adminManager): public class AdminManagerClient { public static void main(String [] args) { try { final Context context = getInitialContext(); AdminManager adminManager = (AdminManager)context.lookup("Uran-AdminManager#hu.elte.pgy2.BACNAAI.UranEJB.AdminManager"); adminManager.addAdmin("root", "root", "Kovács Isten"); adminManager.addStudent("BACNAAI", "matt", "B Cs", 2005); adminManager.addTeacher("SIPKABT", "patt", "S P", "numanal", "Dr."); } catch (Exception ex) { ex.printStackTrace(); } } private static Context getInitialContext() throws NamingException { Hashtable env = new Hashtable(); // WebLogic Server 10.x connection details env.put( Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory" ); env.put(Context.PROVIDER_URL, "t3://127.0.0.1:7101"); return new InitialContext( env ); } } But when I try to run this, I get the following error on the line with adminManager.addAdmin (that is, after the lookup): javax.ejb.EJBException: EJB Exception: ; nested exception is: Exception [EclipseLink-4002] (Eclipse Persistence Services - 1.0.2 (Build 20081024)): org.eclipse.persistence.exceptions.DatabaseException Internal Exception: java.sql.SQLException: Internal error: Cannot obtain XAConnection Creation of XAConnection for pool PGY2 failed after waitSecs:30 : java.sql.SQLException: Data Source PGY2 does not exist. Why can't the client find the data source, and how do I make it find it? EDIT: I took a closer look at WebLogic's output during deployment, and I found this. I have no idea what it means, but it may be relevant: <2010.05.20. 0:50:43 CEST> <Error> <Deployer> <BEA-149231> <Unable to set the activation state to true for the application 'PGY2'. weblogic.application.ModuleException: at weblogic.jdbc.module.JDBCModule.activate(JDBCModule.java:349) at weblogic.application.internal.flow.ModuleListenerInvoker.activate(ModuleListenerInvoker.java:107) at weblogic.application.internal.flow.DeploymentCallbackFlow$2.next(DeploymentCallbackFlow.java:411) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:37) at weblogic.application.internal.flow.DeploymentCallbackFlow.activate(DeploymentCallbackFlow.java:74) Truncated. see log file for complete stacktrace weblogic.common.ResourceException: is already bound at weblogic.jdbc.common.internal.RmiDataSource.start(RmiDataSource.java:387) at weblogic.jdbc.common.internal.DataSourceManager.createAndStartDataSource(DataSourceManager.java:136) at weblogic.jdbc.common.internal.DataSourceManager.createAndStartDataSource(DataSourceManager.java:97) at weblogic.jdbc.module.JDBCModule.activate(JDBCModule.java:346) at weblogic.application.internal.flow.ModuleListenerInvoker.activate(ModuleListenerInvoker.java:107) Truncated. see log file for complete stacktrace >
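    One diagnostic worth trying (a sketch, not a definitive fix): connect with the exact same JNDI settings as the failing client and list what is actually bound on the server. The "is already bound" message in the deployment log suggests the name PGY2 may be deployed twice, e.g. once as the globally defined data source and once as an application-scoped JDBC module packaged into the deployment by JDeveloper; in that case activation fails and the pool never becomes usable. The class below assumes only the connection details already shown in the question (plus the WebLogic client jar on the classpath).

    import java.util.Hashtable;
    import javax.naming.Binding;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingEnumeration;
    import javax.naming.NamingException;

    public class JndiBindingLister {
        public static void main(String[] args) throws NamingException {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://127.0.0.1:7101");
            Context ctx = new InitialContext(env);
            // Walk the root context; a successfully deployed data source
            // normally shows up under its configured JNDI name (e.g. "PGY2").
            NamingEnumeration<Binding> bindings = ctx.listBindings("");
            while (bindings.hasMore()) {
                Binding b = bindings.next();
                System.out.println(b.getName() + " -> " + b.getClassName());
            }
            ctx.close();
        }
    }

    If "PGY2" is missing from the listing, the problem is likely on the deployment side (a duplicate JDBC module to remove), not in the client code.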

    Read the article

  • Uses of a C++ Arithmetic Promotion Header

    - by OlduvaiHand
    I've been playing around with a set of templates for determining the correct promotion type given two primitive types in C++. The idea is that if you define a custom numeric template, you could use these to determine the return type of, say, the operator+ function based on the class passed to the templates. For example: // Custom numeric class template <class T> struct Complex { Complex(T real, T imag) : r(real), i(imag) {} T r, i; // Other implementation stuff }; // Generic arithmetic promotion template template <class T, class U> struct ArithmeticPromotion { typedef typename X type; // I realize this is incorrect, but the point is it would // figure out what X would be via trait testing, etc }; // Specialization of arithmetic promotion template template <> struct ArithmeticPromotion<long long, unsigned long> { typedef unsigned long long type; }; // Arithmetic promotion template actually being used template <class T, class U> Complex<typename ArithmeticPromotion<T, U>::type> operator+ (Complex<T>& lhs, Complex<U>& rhs) { return Complex<typename ArithmeticPromotion<T, U>::type>(lhs.r + rhs.r, lhs.i + rhs.i); } If you use these promotion templates, you can more or less treat your user-defined types as if they're primitives, with the same promotion rules being applied to them. So, I guess the question I have is: would this be something that could be useful? And if so, what sorts of common tasks would you want templated out for ease of use? I'm working on the assumption that just having the promotion templates alone would be insufficient for practical adoption. Incidentally, Boost has something similar in its math/tools/promotion header, but it's really more for getting values ready to be passed to the standard C math functions (that expect either 2 ints or 2 doubles) and bypasses all of the integral types. Is something that simple preferable to having complete control over how your objects are being converted? TL;DR: What sorts of helper templates would you expect to find in an arithmetic promotion header beyond the machinery that does the promotion itself?

    Read the article

  • Model-View-Controller in JavaScript

    - by Casey Hope
    tl;dr: How does one implement MVC in JavaScript in a clean way? I'm trying to implement MVC in JavaScript. I have googled and reorganized my code countless times but have not found a suitable solution. (The code just doesn't "feel right".) Here's how I'm going about it right now. It's incredibly complicated and is a pain to work with (but still better than the pile of code I had before). It has ugly workarounds that sort of defeat the purpose of MVC. And behold, the mess, if you're really brave: // Create a "main model" var main = Model0(); function Model0() { // Create an associated view and store its methods in "view" var view = View0(); // Create a submodel and pass it a function // that will "subviewify" the submodel's view var model1 = Model1(function (subview) { view.subviewify(subview); }); // Return model methods that can be used by // the controller (the onchange handlers) return { 'updateModel1': function (newValue) { model1.update(newValue); } }; } function Model1(makeSubView) { var info = ''; // Make an associated view and attach the view // to the parent view using the passed function var view = View1(); makeSubView(view.__view); // Dirty dirty // Return model methods that can be used by // the parent model (and so the controller) return { 'update': function (newValue) { info = newValue; // Notify the view of the new information view.events.value(info); } }; } function View0() { var thing = document.getElementById('theDiv'); var input = document.getElementById('theInput'); // This is the "controller", bear with me input.onchange = function () { // Ugly, uses a global to contact the model main.updateModel1(this.value); }; return { 'events': {}, // Adds a subview to this view. 'subviewify': function (subview) { thing.appendChild(subview); } }; } // This is a subview. function View1() { var element = document.createElement('div'); return { 'events': { // When the value changes this is // called so the view can be updated 'value': function (newValue) { element.innerHTML = newValue; } }, // Expose the DOM representation of the subview // so it can be attached to a parent view '__view': element }; } How does one implement MVC in JavaScript in a cleaner way? How can I improve this system? Or is this the completely wrong way to go? Should I follow another pattern?

    Read the article

  • Delaying emails in PHP to avoid exceeding server limit

    - by Andrew P.
    Okay, so here's my problem: I have a list of members on a website, and periodically one of the admins of my site (who are not very web or tech savvy) will send a newsletter to the memberlist. My current memberlist is well over 800 individuals long. So, I wrote an email script that sends the email to the full memberlist, with the members listed in the Bcc header. However, I've discovered that my host server has a limit of 300 emails per hour, which I apparently exceed even though the members are listed in the Bcc field. (I wasn't previously aware that the behaviour of Bcc was to send separate emails for each name on the list...) After some thought, I've come to the conclusion that my only solution is to have my script send the email to only the first 300 addresses, wait an hour, send a second email to the next three hundred, wait another hour, and so on until I've sent the email to the whole member list. Looking around on the internet, I've seen some other solutions people have come up with for delaying emails in PHP. Sleep() is obviously not an option, because I can't just leave the script open and running for three or four hours. I've seen some people suggest cron jobs, but I'm not sure how feasible it would be to create three new cron jobs every time I send an email, use them once, and then delete them afterward. The final (and what I think is the smartest) solution I've seen is to have a table in my database to temporarily store the emails to be delayed and sent later, and then create a cron job that checks this sql table every hour or so, compares the timestamp of the row to the current timestamp, and then sends the email if an hour has passed. So I'm asking you all which method you would recommend. Is there an easier solution that I've completely overlooked (aside from getting a different hosting plan. ha!), or is there a cleaner way to do it than the database / cron job approach? tl;dr: I have 800 emails to send in an hour on a server that limits me to 300/hr. Using PHP, find a way to get around this problem in a way that the person sending the email needs only to click "send."

    Read the article

  • HttpClient response handler always returns closed stream

    - by Alex Ciminian
    I'm new to Java development so please bear with me. Also, I hope I'm not the champion of tl;dr :). I'm using HttpClient to make requests over Http (duh!) and I'd gotten it to work for a simple servlet that receives an URL as a query string parameter. I realized that my code could use some refactoring, so I decided to make my own HttpResponseHandler, to clean up the code, make it reusable and improve exception handling. I currently have something like this: public class HttpResponseHandler implements ResponseHandler<InputStream>{ public InputStream handleResponse(HttpResponse response) throws ClientProtocolException, IOException { int statusCode = response.getStatusLine().getStatusCode(); InputStream in = null; if (statusCode != HttpStatus.SC_OK) { throw new HttpResponseException(statusCode, null); } else { HttpEntity entity = response.getEntity(); if (entity != null) { in = entity.getContent(); // This works // for (int i;(i = in.read()) >= 0;) System.out.print((char)i); } } return in; } } And in the method where I make the actual request: HttpClient httpclient = new DefaultHttpClient(); HttpGet httpget = new HttpGet(target); ResponseHandler<InputStream> httpResponseHandler = new HttpResponseHandler(); try { InputStream in = httpclient.execute(httpget, httpResponseHandler); // This doesn't work // for (int i;(i = in.read()) >= 0;) System.out.print((char)i); return in; } catch (HttpResponseException e) { throw new HttpResponseException(e.getStatusCode(), null); } The problem is that the input stream returned from the handler is closed. I don't have any idea why, but I've checked it with the prints in my code (and no, I haven't used them both at the same time :). While the first print works, the other one gives a closed stream error. I need InputStreams, because all my other methods expect an InputStream and not a String. Also, I want to be able to retrieve images (or maybe other types of files), not just text files. I can work around this pretty easily by giving up on the response handler (I have a working implementation that doesn't use it), but I'm pretty curious about the following: Why does it do what it does? How do I open the stream, if something closes it? What's the right way to do this, anyway :)? I've checked the docs and I couldn't find anything useful regarding this issue. To save you a bit of Googling, here's the Javadoc and here's the HttpClient tutorial (Section 1.1.8 - Response handlers). Thanks, Alex
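    For what it's worth, one common way out, sketched under the assumption that execute() releases the connection as soon as handleResponse() returns (which would explain the closed stream): buffer the entity into memory inside the handler and return a stream over the copy. EntityUtils is part of HttpClient's core utilities; everything else mirrors the question's code.

    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;
    import org.apache.http.HttpStatus;
    import org.apache.http.client.ClientProtocolException;
    import org.apache.http.client.HttpResponseException;
    import org.apache.http.client.ResponseHandler;
    import org.apache.http.util.EntityUtils;

    public class BufferingResponseHandler implements ResponseHandler<InputStream> {
        public InputStream handleResponse(HttpResponse response)
                throws ClientProtocolException, IOException {
            int statusCode = response.getStatusLine().getStatusCode();
            if (statusCode != HttpStatus.SC_OK) {
                throw new HttpResponseException(statusCode, null);
            }
            HttpEntity entity = response.getEntity();
            if (entity == null) {
                return null;
            }
            // Copy the whole body (text or binary) while the connection is
            // still open; the returned stream is independent of the connection.
            byte[] body = EntityUtils.toByteArray(entity);
            return new ByteArrayInputStream(body);
        }
    }

    The trade-off is that the whole response is held in memory, which is fine for typical pages and images but not for very large downloads; for those, skipping the response handler (as in the working implementation mentioned above) is the reasonable choice.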

    Read the article

  • Switch case assembly level code

    - by puffadder
    Hi all, I am programming C on Cygwin on Windows. After having done a bit of C programming and getting comfortable with the language, I wanted to look under the hood and see what the compiler is doing for the code that I write. So I wrote down a code block containing switch case statements and converted them into assembly using: gcc -S foo.c Here is the C source: switch(i) { case 1: { printf("Case 1\n"); break; } case 2: { printf("Case 2\n"); break; } case 3: { printf("Case 3\n"); break; } case 4: { printf("Case 4\n"); break; } case 5: { printf("Case 5\n"); break; } case 6: { printf("Case 6\n"); break; } case 7: { printf("Case 7\n"); break; } case 8: { printf("Case 8\n"); break; } case 9: { printf("Case 9\n"); break; } case 10: { printf("Case 10\n"); break; } default: { printf("Nothing\n"); break; } } Now the resultant assembly for the same is: movl $5, -4(%ebp) cmpl $10, -4(%ebp) ja L13 movl -4(%ebp), %eax sall $2, %eax movl L14(%eax), %eax jmp *%eax .section .rdata,"dr" .align 4 L14: .long L13 .long L3 .long L4 .long L5 .long L6 .long L7 .long L8 .long L9 .long L10 .long L11 .long L12 .text L3: movl $LC0, (%esp) call _printf jmp L2 L4: movl $LC1, (%esp) call _printf jmp L2 L5: movl $LC2, (%esp) call _printf jmp L2 L6: movl $LC3, (%esp) call _printf jmp L2 L7: movl $LC4, (%esp) call _printf jmp L2 L8: movl $LC5, (%esp) call _printf jmp L2 L9: movl $LC6, (%esp) call _printf jmp L2 L10: movl $LC7, (%esp) call _printf jmp L2 L11: movl $LC8, (%esp) call _printf jmp L2 L12: movl $LC9, (%esp) call _printf jmp L2 L13: movl $LC10, (%esp) call _printf L2: Now, in the assembly, the code first checks against the last case (i.e. case 10). This is very strange. And then it is copying 'i' into 'eax' and doing things that are beyond me. I have heard that the compiler implements some jump table for switch..case. Is that what this code is doing? Or what is it doing and why? Because when there are only a few cases, the code is pretty similar to that generated for an if...else ladder, but when the number of cases increases, this unusual-looking implementation is seen. Thanks in advance.

    Read the article

  • Approaches for Content-based Item Recommendations

    - by PartlyCloudy
    Hello, I'm currently developing an application where I want to group similar items. Items (like videos) can be created by users and also their attributes can be altered or extended later (like new tags). Instead of relying on users' preferences as most collaborative filtering mechanisms do, I want to compare item similarity based on the items' attributes (like similar length, similar colors, similar set of tags, etc.). The computation is necessary for two main purposes: suggesting x similar items for a given item, and clustering into groups of similar items. My application so far follows an asynchronous design and I want to decouple this clustering component as far as possible. The creation of new items or the addition of new attributes for an existing item will be advertised by publishing events the component can then consume. Computations can be provided best-effort and "snapshotted", which means that I'm okay with the best result possible at a given point in time, although result quality will eventually increase. So I am now searching for appropriate algorithms to compute both similar items and clusters. An important constraint is scalability. Initially the application has to handle a few thousand items, but later millions of items might be possible as well. Of course, computations will then be executed on additional nodes, but the algorithm itself should scale. It would also be nice if the algorithm supports some kind of incremental mode on partial changes of the data. My initial thought of comparing each item with each other and storing the numerical similarity sounds a little bit crude. Also, it requires n*(n-1)/2 entries for storing all similarities and any change or new item will eventually cause n similarity computations. Thanks in advance! UPDATE tl;dr To clarify what I want, here is my targeted scenario: Users generate entries (think of documents). Users edit entry metadata (think of tags). And here is what my system should provide: a list of similar entries to a given item as recommendation, and clusters of similar entries. Both calculations should be based on: The metadata/attributes of entries (i.e. usage of similar tags) Thus, the distance of two entries using appropriate metrics NOT based on user votes, preferences or actions (unlike collaborative filtering). Although users may create entries and change attributes, the computation should only take into account the items and their attributes, and not the users associated with them (just like a system where only items and no users exist). Ideally, the algorithm should support: permanent changes to an entry's attributes; incremental computation of similar entries/clusters on changes; scaling; and something better than a simple distance table, if possible (because of the O(n²) space complexity).
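    As one concrete starting point for the attribute-based distance, here is a minimal sketch of Jaccard similarity over tag sets in Java; representing an entry as a Set<String> of tags is an assumption for illustration, not from the question. Approximate schemes such as MinHash/locality-sensitive hashing build on exactly this measure and avoid the full O(n²) table.

    import java.util.HashSet;
    import java.util.Set;

    public class TagSimilarity {
        // Jaccard similarity: |intersection| / |union|, in [0, 1];
        // 1.0 means identical tag sets, 0.0 means no overlap.
        public static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) {
                return 0.0;
            }
            Set<String> intersection = new HashSet<String>(a);
            intersection.retainAll(b);
            Set<String> union = new HashSet<String>(a);
            union.addAll(b);
            return (double) intersection.size() / union.size();
        }
    }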

    Read the article

  • NVIDIA graphics driver in Ubuntu 12.04

    - by user924501
    So my overall goal is that I want to be able to code CUDA-enabled applications. However, upon many days of searching, using installation walkthroughs, and reinstalling countless times after driver failure... I'm now here as a last resort. I cannot get Ubuntu 12.04 LTS to install the NVIDIA 295.59 driver for my GeForce GT 540M NVIDIA graphics card. My main system specs are as follows... (I believe having the Intel processor may be the problem) DELL Laptop XPS L502X Intel® Core™ i7-2620M CPU @ 2.70GHz × 4 Intel Integrated Graphics 64 bit NVIDIA GeForce GT 540M Ubuntu 12.04 LTS All other specs are irrelevant unless I forgot something? Methods Tried: aptitude install nvidia-current (all packages) Results: Nothing really happened. Nothing in the additional drivers menu appeared, nor was the NVIDIA X Server settings application allowing access because it thought there was no NVIDIA X Server installed. Downloaded driver from nvidia.com. Set nomodeset in the grub boot menu through /boot/grub/grub.cfg Went to console and turned off lightdm. Installed the driver, but it said the pre-install failed? (mean anything?) Started up lightdm again. Results: NVIDIA X Server settings still didn't notice it. Even tried to do nvidia-xconfig multiple times. I also went into the config file to make sure the driver setting said "nvidia". aptitude install nvidia-173 (all packages) Results: Couldn't find the xorg-video-abi-10 virtual package. It was nowhere to be found and the Ubuntu forums everywhere had no answers. Lots of people were having this problem. This is easily done in Windows: simply download the driver and debug in Visual Studio with no problems at all. I'd like clear step-by-step instructions on how I should go about this. I'm relatively new to Linux but I can find my way around pretty well so you aren't talking to a straight-up beginner. Also, if you think another thread may have the answer please post because I was having a hard time looking for my specific type of problem. TL;DR I want to have access to my GPU so I can code with CUDA while in Ubuntu 12.04 on my 64 bit laptop that also has Intel integrated graphics on the processor. Solution: sudo apt-add-repository ppa:ubuntu-x-swat/x-updates && sudo apt-get update && sudo apt-get upgrade && sudo apt-get install nvidia-current

    Read the article

  • 2 Shaders using the same vertex data

    - by Fonix
    So I'm having problems rendering using 2 different shaders. I'm currently rendering shapes that represent dice. What I want is: if the dice is selected by the user, it draws an outline by drawing the dice completely red and slightly scaled up, then renders the proper dice over it. At the moment some of the dice, for some reason, render the wrong dice for the outline, but the right one for the proper foreground dice. I'm wondering if they aren't getting their vertex data mixed up somehow. I'm not sure if doing something like this is even allowed in OpenGL: glGenBuffers(1, &_vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer); glBufferData(GL_ARRAY_BUFFER, numVertices*sizeof(GLfloat), vertices, GL_STATIC_DRAW); glEnableVertexAttribArray(effect->vertCoord); glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0); glEnableVertexAttribArray(effect->toon_vertCoord); glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0); I'm trying to bind the vertex data to 2 different shaders here. When I load my first shader I have: vertCoord = glGetAttribLocation(TexAndLighting, "position"); and the other shader has: toon_vertCoord = glGetAttribLocation(Toon, "position"); If I use the shaders independently of each other they work fine, but when I try to render both one on top of the other they get the model mixed up sometimes. Here is how my draw function looks: - (void) draw { [EAGLContext setCurrentContext:context]; glBindVertexArrayOES(_vertexArray); effect->modelViewMatrix = mvm; effect->numberColour = GLKVector4Make(numbers[colorSelected].r, numbers[colorSelected].g, numbers[colorSelected].b, 1); effect->faceColour = GLKVector4Make(faceColors[colorSelected].r, faceColors[colorSelected].g, faceColors[colorSelected].b, 1); if(selected){ [effect drawOutline]; //this function prepares the shader glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0); } [effect prepareToDraw]; //same with this one glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0); } This is what it looks like; as you can see, most of the outlines are using the wrong dice, or none at all: links to full code: http://pastebin.com/yDKb3wrD Dice.mm //rendering stuff http://pastebin.com/eBK0pzrK Effects.mm //shader stuff http://pastebin.com/5LtDAk8J //my shaders, shouldn't be anything to do with them though TL;DR: trying to use 2 different shaders that use the same vertex data, but it's getting the models mixed up when rendering using both at the same time; well, that's what I think is going wrong, quite stumped actually.

    Read the article

  • How can I map a String to a function in Java?

    - by Bears will eat you
    Currently, I have a bunch of Java classes that implement a Processor interface, meaning they all have a processRequest(String key) method. The idea is that each class has a few (say, <10) member Strings, and each of those maps to a method in that class via the processRequest method, like so: class FooProcessor implements Processor { String key1 = "abc"; String key2 = "def"; String key3 = "ghi"; // and so on... String processRequest(String key) { String toReturn = null; if (key1.equals(key)) toReturn = method1(); else if (key2.equals(key)) toReturn = method2(); else if (key3.equals(key)) toReturn = method3(); // and so on... return toReturn; } String method1() { // do stuff } String method2() { // do other stuff } String method3() { // do other other stuff } // and so on... } You get the idea. This was working fine for me, but now I need a runtime-accessible mapping from key to function; not every function actually returns a String (some return void) and I need to dynamically access the return type (using reflection) of each function in each class that there's a key for. I already have a manager that knows about all the keys, but not the mapping from key to function. My first instinct was to replace this mapping using if-else statements with a Map<String, Function>, like I could do in Javascript. But, Java doesn't support first-class functions so I'm out of luck there. I could probably dig up a third-party library that lets me work with first-class functions, but I haven't seen any yet, and I doubt that I need an entire new library. I also thought of putting these String keys into an array and using reflection to invoke the methods by name, but I see two downsides to this method: My keys would have to be named the same as the method - or be named in a particular, consistent way so that it's easy to map them to the method name. This seems WAY slower than the if-else statements I have right now. Efficiency is something of a concern because these methods will tend to get called pretty frequently, and I want to minimize unnecessary overhead. TL; DR: I'm looking for a clean, minimal-overhead way to map a String to some sort of a Function object that I can invoke and call (something like) getReturnType() on. I don't especially mind using a 3rd-party library if it really fits my needs. I also don't mind using reflection, though I would strongly prefer to avoid using reflection every single time I do a method lookup - maybe using some caching strategy that combines the Map with reflection. Thoughts on a good way to get what I want? Cheers!
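    A sketch of the caching strategy mentioned at the end of the question, using only the standard library: resolve each key's java.lang.reflect.Method once at construction time, then dispatch through a plain map get. The keys and method names mirror the question; the Processor interface is reproduced from it, and the error handling is illustrative.

    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.util.HashMap;
    import java.util.Map;

    interface Processor { String processRequest(String key); } // as in the question

    class FooProcessor implements Processor {
        // Built once; reflection cost is paid at construction, not per call.
        private final Map<String, Method> handlers = new HashMap<String, Method>();

        FooProcessor() throws NoSuchMethodException {
            handlers.put("abc", FooProcessor.class.getDeclaredMethod("method1"));
            handlers.put("def", FooProcessor.class.getDeclaredMethod("method2"));
            handlers.put("ghi", FooProcessor.class.getDeclaredMethod("method3"));
        }

        public String processRequest(String key) {
            Method m = handlers.get(key);
            if (m == null) return null;
            try {
                // The cached Method also exposes the dynamic return type,
                // e.g. m.getReturnType() == Void.TYPE for the void handlers.
                Object result = m.invoke(this);
                return result == null ? null : result.toString();
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            } catch (InvocationTargetException e) {
                throw new RuntimeException(e.getCause());
            }
        }

        String method1() { return "one"; }
        String method2() { return "two"; }
        void method3() { /* returns nothing, so processRequest yields null */ }
    }

    Once the Method objects are cached, invoke() itself adds little overhead compared with the if-else ladder; if even that is a concern, the same map can hold small anonymous classes implementing a one-method handler interface instead of reflective Methods.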

    Read the article

  • Scan a Windows PC for Viruses from a Ubuntu Live CD

    - by Trevor Bekolay
    Getting a virus is bad. Getting a virus that causes your computer to crash when you reboot is even worse. We’ll show you how to clean viruses from your computer even if you can’t boot into Windows by using a virus scanner in a Ubuntu Live CD. There are a number of virus scanners available for Ubuntu, but we’ve found that avast! is the best choice, with great detection rates and usability. Unfortunately, avast! does not have a proper 64-bit version, and forcing the install does not work properly. If you want to use avast! to scan for viruses, then ensure that you have a 32-bit Ubuntu Live CD. If you currently have a 64-bit Ubuntu Live CD on a bootable flash drive, it does not take long to wipe your flash drive and go through our guide again and select normal (32-bit) Ubuntu 9.10 instead of the x64 edition. For the purposes of fixing your Windows installation, the 64-bit Live CD will not provide any benefits. Once Ubuntu 9.10 boots up, open up Firefox by clicking on its icon in the top panel. Navigate to http://www.avast.com/linux-home-edition. Click on the Download tab, and then click on the link to download the DEB package. Save it to the default location. While avast! is downloading, click on the link to the registration form on the download page. Fill in the registration form if you do not already have a trial license for avast!. By the time you’ve filled out the registration form, avast! will hopefully be finished downloading. Open a terminal window by clicking on Applications in the top-left corner of the screen, then expanding the Accessories menu and clicking on Terminal. In the terminal window, type in the following commands, pressing Enter after each line:
    cd Downloads
    sudo dpkg -i avast*
    This will install avast! on the live Ubuntu environment. To ensure that you can use the latest virus database, while still in the terminal window, type in the following command:
    sudo sysctl -w kernel.shmmax=128000000
    Now we’re ready to open avast!. Click on Applications on the top-left corner of the screen, expand the Accessories folder, and click on the new avast! Antivirus item. You will first be greeted with a window that asks for your license key. Hopefully you’ve received it in your email by now; open the email that avast! sends you, copy the license key, and paste it in the Registration window. avast! Antivirus will open. You’ll notice that the virus database is outdated. Click on the Update database button and avast! will start downloading the latest virus database. To scan your Windows hard drive, you will need to “mount” it. While the virus database is downloading, click on Places on the top-left of your screen, and click on your Windows hard drive, if you can tell which one it is by its size. If you can’t tell which is the correct hard drive, then click on Computer and check out each hard drive until you find the right one. When you find it, make a note of the drive’s label, which appears in the menu bar of the file browser. Also note that your hard drive will now appear on your desktop. By now, your virus database should be updated. At the time this article was written, the most recent version was 100404-0. In the main avast! window, click on the radio button next to Selected folders and then click on the “+” button to the right of the list box. It will open up a dialog box to browse to a location. To find your Windows hard drive, click on the “>” next to the computer icon. In the expanded list, find the folder labelled “media” and click on the “>” next to it to expand it. 
In this list, you should be able to find the label that corresponds to your Windows hard drive. If you want to scan a certain folder, then you can go further into this hierarchy and select that folder. However, we will scan the entire hard drive, so we’ll just press OK. Click on Start scan and avast! will start scanning your hard drive. If a virus is found, you’ll be prompted to select an action. If you know that the file is a virus, then you can Delete it, but there is the possibility of false positives, so you can also choose Move to chest to quarantine it. When avast! is done scanning, it will summarize what it found on your hard drive. You can take different actions on those files at this time by right-clicking on them and selecting the appropriate action. When you’re done, click Close. Your Windows PC is now free of viruses, in the eyes of avast!. Reboot your computer and with any luck it will now boot up! Alternatives to avast! If avast! and a liberal amount of Googling doesn’t fix your problem, it’s possible that a different virus scanner will fix your obscure issue. Here is a list of other virus scanners available for Ubuntu that are either free or offer free trials. See their support forums for help on installing these virus scanners. Avira AntiVir Personal for Linux / Solaris Panda Antivirus for Linux Installation and usage guide from Ubuntu F-PROT Antivirus for Linux ClamAV installation and usage guide from Ubuntu NOD32 Antivirus for Linux Kaspersky Anti-Virus 2010 Bitdefender Antivirus for Unices Conclusion Running avast! from a Ubuntu Live CD can clean the vast majority of viruses from your Windows PC. This is another reason to always have a Ubuntu Live CD ready just in case something happens to your Windows installation!

    Read the article

  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently but perhaps you have not looked at it too closely yet. Today we will take a look at 10.50 and let you see what this “new browser” is all about. The New Engines Carakan JavaScript Engine: Runs web applications up to 7 times faster than its predecessor Futhark Vega Graphics Library: Enables super fast and smooth graphics on everything from tab switching to webpage animation Presto 2.5: Provides support for HTML5, CSS2.1 and the latest CSS3 standards A Look at the Features Available If you have installed or used older versions of Opera before then the default look after a clean install will probably seem rather different. The main differences in appearance are mainly located within the “glass border” areas of the browser. The “Speed Dial” setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of “dials” using the “Configure Speed Dial Command”. One of the “standout” differences is the “O Button”. All of the menus have been condensed into this single access point but it only takes a few moments to find what you are looking for. If you have used the style before in earlier versions of Opera some of the items have been moved around. For those who prefer the “Menu Bar” that can be easily restored using the “Show Menu Bar Command”. If desired you can actually “extend” the “Tab Bar” downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the “Tab Bar” and adjust it to suit your personal needs. The only problem with this feature is that it will quickly use up a good sized portion of your available UI and browser window space. The “Password Manager” is ready to access when needed…the background for the button will turn a shiny metallic blue when you open a webpage that you have “Login Information” saved for. One of the new features is a small “Recycle Bin Button” in the upper right corner. Clicking on this will display a list of recently closed tabs letting you have easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have as an easy access button. For those who were used to how the “Zoom Feature” looked before it has a new “look” to it. Instead of the pop-up menu-type listing of “view sizes” present before you now have a slider button that you can use to adjust the zooming level. For our default setup here the “Sidebar Panels” available were: “Bookmarks, Widgets, Unite, Notes, Downloads, History, & Panels”. Additional panels such as “Links, Windows, Search, Info, etc.” are available if you want and/or need them (accessible using the “Panels Plus Sign Button”). The “Opera Link Button” makes it easy for you to synchronize your “Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes”. Note: “Opera Link” requires an account and can be signed up for using the link provided below. Want to share files with your family and friends? “Unite” allows you to do that and more. With “Unite” you can: “Stream Music, Show Photo Galleries, Share Files and/or Folders, & host webpages directly from your browser”. We have a more in-depth look at “Unite” in our article here. Note: Use of “Unite” requires an Opera account. Got a slow internet connection? “Opera Turbo” can help with that by running the web traffic through their “compression servers” to speed up your web browsing. 
Keep in mind that “Opera Turbo” will not engage if you are accessing a secure website (i.e. your bank’s website) thus preserving your security. Note: “Opera Turbo” can be set up to automatically detect slow internet connections (i.e. crowded Wi-Fi in a cafe). Opera has a built-in “Private Browsing Mode” now for those who prefer anonymous browsing and want to keep the “history records clean” on their computer. To access it go to “Tabs and windows” and select “New private tab” or “New private window” as desired. When you open your new “Private Tab or Window” you will see the following message with details on how Opera will handle browsing information and a large “door hanger symbol”. Notice that the one tab is locked into “Private Browsing Mode” while the others are still working in “Regular Browsing Mode”. Very nice! A miniature version of the “door hanger symbol” will be present on any tab that is locked into “Private Browsing Mode”. If you are using Windows 7 then you will love how things look from your “Taskbar”. Here you can see four very nice looking thumbnails for the tabs that we had open. All that you have to do is click on the desired thumbnail… The “Context Menu” looks just as lovely as the thumbnails and definitely has some terrific functionality built into it. Add Enhanced Aero Capability If you love “Aero” and want more for your new Opera install then we have the perfect theme for you. The theme’s name is Z1-AV69 and once you have downloaded it you will need to place it in the “Skins Subfolder” in Opera’s “Program Files Folder”. Note: For our example we used version 1.10 but version 2.00 is now available (link provided below). Once you have restarted Opera, go to the “O Menu” and select “Appearance”. When the “Appearance Window” opens click on “Z1-Glass Skin” and then click “OK”. All of a sudden you will have more “Aero Goodness” to enjoy. Compare this screenshot with the one at the top of this article…the only part that is not transparent now is the browser window area itself. Want even more “Aero Goodness”? Right click on the “Tab Bar” and set “Tab Bar Placement” to “Left”. Note: You can achieve the same effect by setting the “Tab Bar Placement” to “Right”. With the “Speed Dial” visible you will be able to see your wallpaper with ease. While this is obviously not for everyone it does make for a great visual trick. Portable Versions Perhaps you need this wonderful new version of Opera to go with you wherever you do during the day. Not a problem…just visit the Opera USB website to choose a version that works best for you. You can select from “Zip or Exe” setup files and if needed update an older portable version using a “Zipped Update Files Package”. If you are updating an older version keep in mind that you will need to delete the old “OperaUSB.exe. File” due to changes with the new setup files. During our tests updating older portable versions went well for the most part but we did experience a few “odd UI quirks” here and there…so we recommend setting up a clean install if possible. Conclusion The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit we think that you will pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera be certain to look at our how-to article here. 
Links Download Opera 10.50 for your location (Windows) Get the latest Snapshot versions for Linux & Mac Sign up for an Opera Link account View In-Depth detail on Opera 10.50’s features Download the Z1-AV69 Aero Theme Download Portable Opera 10.50

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following: Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirement. Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing. CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities. Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance. Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units. Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment. Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system. CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle. Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: Jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O. 
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC). Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack. SPARC Server Virtualization Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.  For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment. Management with Oracle Enterprise Manager Ops Center The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure. Ease of Deployment with Configuration Assistant The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool. Oracle Solaris Cluster HA Support The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node. Supported Systems Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems ·   Sun SPARC Enterprise T5140 Server ·   Sun SPARC Enterprise T5240 Server ·   Sun SPARC Enterprise T5440 Server ·   Sun Netra T5440 Server ·   Sun Blade T6340 Server Module ·   Sun Netra T6340 Server Module UltraSPARC T2 Systems ·   Sun SPARC Enterprise T5120 Server ·   Sun SPARC Enterprise T5220 Server ·   Sun Netra T5220 Server ·   Sun Blade T6320 Server Module ·   Sun Netra CP3260 ATCA Blade Server Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. Also this article discusses the process of creating an in-memory file and reading/writing data from/to it, utilizing the new MemoryMappedFile feature. I have a database of users, where I need to send a notice to all my users as a PDF document. The mail-sending part of it is not covered in this article. The PDF document will contain the company letter head, to make it more official. I have a list of users stored in a database table named "tblusers". For each user I need to send a customized message addressed to them personally. The database structure for the users is given below. id Title Full Name 1 Mr. Sreeju Nair K. G. 2 Dr. Alberto Mathews 3 Prof. Venketachalam Now I am going to generate the pdf document that contains some message to the user, in the following format. Dear <Title> <FullName>, The message for the user. Regards, Administrator Also I have an image, bg.jpg, that contains the background for the document generated. I have created a .Net 4.0 empty web application project named "iTextSharpSample". The first thing I need to do is to download the iTextSharp dll from SourceForge. You can find the url for the download here. http://sourceforge.net/projects/itextsharp/files/ I have extracted the Zip file and added the itextsharp.dll as a reference to my project. Also I have added a web form named default.aspx to my project. After doing all this, the solution explorer has the following view. In the default.aspx page, I inserted one grid view and associated it with a SQL Data source control that binds data from tblusers. I have added a button column in the grid view with text "generate pdf". The output of the page in the browser is as follows. Now I am going to create a pdf document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on the disk. I added an event handler for the button by specifying the onrowcommand event handler. My gridview source looks like <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" > ………………………………………………………………………….. ………………………………………………………………………….. </asp:GridView> In the code behind, I wrote the corresponding event handler. protected void Generate_PDF(object sender, GridViewCommandEventArgs e) { // The button click event handler code. // I am going to explain the code for this section in the remaining part of the article } The Generate_PDF method is straightforward: it gets the title, fullname and message into some variables, then creates the pdf using these variables. The code for getting data from the grid view is as follows // get the row index stored in the CommandArgument property int index = Convert.ToInt32(e.CommandArgument); // get the GridViewRow where the command is raised GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index]; string title = selectedRow.Cells[1].Text; string fullname = selectedRow.Cells[2].Text; string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personal details by visiting the member area of the website. ................................... 
"; since I don't want to save the file to the disk, I am going to use the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using a memory-mapped file, you can create non-persisted memory-mapped files that are not associated with a file on disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code uses the MemoryMappedFile object to create a test pdf document in memory and performs read/write operations on the file. The CreateViewStream() method will give you a stream that can be used to read or write data to/from the file. The code is very straightforward and I included comments so that you can understand it. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); // reuse this view stream for its length instead of creating new views b = new byte[stream.Length]; rdr.Read(b, 0, (int)stream.Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created document looks as follows. Summary: Creating a pdf document using iTextSharp is easy. You will get a lot of information while surfing the web. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Underwriting in a New Frontier: Spurring Innovation

    - by [email protected]
    Susan Keuer, product strategy manager for Oracle Insurance, shares her experiences and insight from the 2010 Association of Home Office Underwriters (AHOU) Annual Conference, April 11-14, in San Antonio, Texas. How can I be more innovative in underwriting? It's a common question I hear from insurance carriers, producers and others, so it was no surprise that it was the key theme at the recent 2010 AHOU Annual Conference. This year's event drew more than 900 insurance professionals involved in the underwriting process across life and annuities, property and casualty and reinsurance from around the globe, including the U.S., Canada, Australia, Bahamas, and more, to San Antonio - a Texas city where innovation transformed a series of downtown drainage canals into its premiere River Walk tourist destination. CNN's Medical Correspondent Dr. Sanjay Gupta kicked off the conference with a phenomenal opening session that drove home the theme of the conference, "Underwriting in a New Frontier: Spurring Innovation." Drawing from his own experience as a neurosurgeon treating critically injured medical patients in the field in Iraq, Gupta inspired audience members to think outside the box during the underwriting process. He shared a compelling story of operating on a soldier who had suffered a head-related trauma in a field hospital. With minimal supplies available Gupta used a Black and Decker saw to operate on the soldier's head and reduce pressure on his swelling brain. Drawing from this example, Gupta encouraged underwriters to think creatively, be innovative, and consider new tools and sources of information, such as social networking sites, during the underwriting process. So as you are looking at risk, take into consideration all resources you have available. Gupta also stressed the concept of IKIGAI - noting that individuals who believe that their life is worth living are less likely to die than are their counterparts without this belief. How does one quantify this approach to life or thought process when evaluating risk? Could this be something to consider as a "category" in the near future? How can this same belief in your own work spur innovation? The role of technology was a hot topic of discussion throughout the conference. Sessions delved into the latest in underwriting software to the rise of social media and how it is being increasingly integrated into underwriting process and solutions. In one session a trio of panelists representing the carrier, producer and vendor communities stressed the importance to underwriters of leveraging new technology and the plethora of online information sources, which all could be used to accurately, honestly and consistently evaluate the risk throughout the underwriting process. Another focused on the explosion of social media noting: 1. Social media is growing exponentially - About eight percent of Americans used social media five years ago. 
Today about 46 percent of Americans do so, with 85 percent of financial services professionals using social media in their work.  2.    It will impact your business - Underwriters reconfirmed over and over that they are increasingly using "free" tools that are available in cyberspace in lieu of more costly solutions, such as inspection reports conducted by individuals in the field.  3.    Information is instantly available on the Web, anytime, anywhere - LinkedIn was mentioned as a way to connect to peers in the underwriting community and producers alike.  Many carriers and agents also are using Facebook to promote their company to customers - and as a point-of-entry to allow them to perform some functionality - such as accessing product marketing information versus directing users to go to the carrier's own proprietary website.  Other carriers have released their tight brand marketing to allow their producers to drive more business to their personal Facebook site where they offer innovative tools such as Application Capture or asking medical information in a more relaxed fashion.     Other key topics at the conference included the economy, ongoing industry consolidation, real-estate valuations as an asset and input into the underwriting process, and producer trends.  All stressed a "back to basics" approach for low cost, term products.   Finally, Connie Merritt, RN, PHN, entertained the large group of atttendees with audience-engaging insight on how to "Tame the Lions in Your Life - Dealing with Complainers, Bullies, Grump and Curmudgeon." Merritt noted "we are too busy for our own good." She shared how her overachieving personality had impacted her life.  Audience members then were asked to pick red, yellow, blue, or green shapes, without knowing that each one represented a specific personality trait.  For example, those who picked blue were the peacemakers. Those who choose yellow were social - the hint was to "Be Quiet Longer."  She then offered these "lion taming" steps:   1.    Admit It 2.    Accept It 3.    Let Go 4.    Be Present (which paralleled Gupta's IKIGAI concept)   When thinking about underwriting I encourage you to be present in the moment and think creatively, but don't be afraid to look ahead to the future and be an innovator.  I hope to see you at next year's AHOU Annual Conference, May 1-4, 2011 at The Mirage in Las Vegas, Nev.     Susan Keuer is the product strategy manager for new business underwriting.  She brings more than 20 years of insurance industry experience working with leading insurance carriers and technology companies to her role on the product strategy team for life/annuities solutions within the Oracle Insurance Global Business Unit  

    Read the article

  • MySQL Cluster 7.3 Labs Release – Foreign Keys Are In!

    - by Mat Keep
Summary (aka TL;DR): Support for Foreign Key constraints has been one of the most requested feature enhancements for MySQL Cluster. We are therefore extremely excited to announce that Foreign Keys are part of the first Labs Release of MySQL Cluster 7.3 - available for download, evaluation and feedback now! (Select the mysql-cluster-7.3-labs-June-2012 build.) In this blog, I will discuss the design rationale, the implementation, configuration, and the steps to get started in evaluating the first MySQL Cluster 7.3 Labs Release.

Pace of Innovation

It was only a couple of months ago that we announced the General Availability (GA) of MySQL Cluster 7.2, delivering 1 billion queries per minute, with 70x higher cross-shard JOIN performance, a Memcached NoSQL key-value API and cross-data center replication. This release has been a huge hit, with downloads and deployments quickly reaching record levels. The announcement of the first MySQL Cluster 7.3 Early Access labs release at today's MySQL Innovation Day event demonstrates the continued pace of Cluster development, and provides an opportunity for the community to evaluate and give feedback on the new features they want to see.

What's the Plan for MySQL Cluster 7.3?

Well, Foreign Keys, as you may have gathered by now (!), and this is the focus of this first Labs Release. As with MySQL Cluster 7.2, we plan to publish a series of preview releases for 7.3 that will incrementally add new candidate features for a final GA release (subject to the usual safe harbor statement below*), including:

- New NoSQL APIs;
- Features to automate the configuration and provisioning of multi-node clusters, on premise or in the cloud;
- Performance and scalability enhancements;
- Taking advantage of features in the latest MySQL 5.x Server GA.

Design Rationale

MySQL Cluster is designed as a "Not-Only-SQL" database. It combines attributes that enable users to blend the best of both relational and NoSQL technologies into solutions that deliver web scalability with 99.999% availability and real-time performance, including:

- Concurrent NoSQL and SQL access to the database;
- Auto-sharding with simple scale-out across commodity hardware;
- Multi-master replication with failover and recovery both within and across data centers;
- Shared-nothing architecture with no single point of failure;
- Online scaling and schema changes;
- ACID compliance and support for complex queries, across shards.

Native support for Foreign Key constraints enables users to extend the benefits of MySQL Cluster into a broader range of use cases, including:

- Packaged applications in areas such as eCommerce and Web Content Management that prescribe databases with Foreign Key support.
- In-house developments benefiting from Foreign Key constraints to simplify data models and eliminate the additional application logic needed to maintain data consistency and integrity between tables.

Implementation

The Foreign Key functionality is implemented directly within MySQL Cluster's data nodes, allowing any client API accessing the cluster to benefit from it - whether using SQL or one of the NoSQL interfaces (Memcached, C++, Java, JPA or HTTP/REST). The core referential actions defined in the SQL:2003 standard are implemented:

- CASCADE
- RESTRICT
- NO ACTION
- SET NULL

In addition, the MySQL Cluster implementation supports the online adding and dropping of Foreign Keys, ensuring the Cluster continues to serve both read and write requests during the operation.

An important difference to note from the Foreign Key implementation in InnoDB is that MySQL Cluster does not support the updating of Primary Keys from within the data nodes themselves - instead the UPDATE is emulated with a DELETE followed by an INSERT operation. Therefore an UPDATE operation will return an error if the parent reference is using a Primary Key, unless the CASCADE action is used, in which case the delete operation will result in the corresponding rows in the child table being deleted. The Engineering team plans to change this behavior in a subsequent preview release. Also note that while in InnoDB "NO ACTION" is identical to "RESTRICT", in MySQL Cluster "NO ACTION" means "deferred check": the constraint is checked before commit, allowing user-defined triggers to automatically make changes in order to satisfy the Foreign Key constraints.

Configuration

There is nothing special you have to do here - Foreign Key constraint checking is enabled by default. If you intend to migrate existing tables from another database or storage engine, for example from InnoDB, there are a couple of best practices to observe:

1. Analyze the structure of the Foreign Key graph and run the ALTER TABLE ... ENGINE=NDB statements in the correct sequence to ensure constraints are enforced.
2. Alternatively, drop the Foreign Key constraints prior to the import process and then recreate them when complete.

Getting Started

Read this blog for a demonstration of using Foreign Keys with MySQL Cluster. You can download the MySQL Cluster 7.3 Labs Release with Foreign Keys today (select the mysql-cluster-7.3-labs-June-2012 build). If you are new to MySQL Cluster, the Getting Started guide will walk you through installing an evaluation cluster on a single host (these guides reflect MySQL Cluster 7.2, but apply equally well to 7.3). Post any questions to the MySQL Cluster forum, where our Engineering team will attempt to assist you. Post any bugs you find to the MySQL bug tracking system (select MySQL Cluster from the Category drop-down menu). And if you have any feedback, please post it to the Comments section of this blog.

Summary

MySQL Cluster 7.2 is the GA, production-ready release of MySQL Cluster. This first Labs Release of MySQL Cluster 7.3 gives you the opportunity to preview and evaluate future developments in the MySQL Cluster database, and we are very excited to be able to share that with you. Let us know how you get along with MySQL Cluster 7.3, and which other features you want to see in future releases.

* Safe Harbor Statement

This information is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions.
The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
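
To make the Foreign Key support described above concrete, here is a minimal SQL sketch of a parent/child pair of NDB tables with a cascading constraint. The table and column names are hypothetical, and this is only an illustration of the feature, not the blog's own demo - check the Labs Release documentation for the authoritative syntax:

-- Parent table, stored in the data nodes via the NDB engine
CREATE TABLE country (
    code CHAR(2) NOT NULL,
    name VARCHAR(64) NOT NULL,
    PRIMARY KEY (code)
) ENGINE=NDB;

-- Child table: deleting a country cascades the delete to its cities
CREATE TABLE city (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(64) NOT NULL,
    country_code CHAR(2) NOT NULL,
    PRIMARY KEY (id),
    CONSTRAINT fk_city_country
        FOREIGN KEY (country_code) REFERENCES country (code)
        ON DELETE CASCADE
        ON UPDATE RESTRICT
) ENGINE=NDB;

-- Constraints can also be dropped (and added) online, with the cluster
-- continuing to serve reads and writes during the operation
ALTER TABLE city DROP FOREIGN KEY fk_city_country;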

    Read the article

  • 6 Facts About GlassFish Announcement

    - by Bruno.Borges
Since Oracle announced the end of commercial support for future Oracle GlassFish Server versions, the Java EE world has started wondering what will happen to GlassFish Server Open Source Edition. Unfortunately, there's a lot of misleading information going around. So let me clarify some things with facts, not FUD.

Fact #1 - GlassFish Open Source Edition is not dead

GlassFish Server Open Source Edition will remain the reference implementation of Java EE. The current trunk is where an implementation for Java EE 8 will flourish, and this will become the future GlassFish 5.0. Calling "GlassFish is dead" does no good to the Java EE ecosystem. The GlassFish Community will remain strong towards the future of Java EE. Without a revenue-focused mindset, this might actually help the GlassFish community shape the next version, set free from any ties to commercial decisions.

Fact #2 - OGS support is not over

As I said before, GlassFish Server Open Source Edition will continue. The main change is that there will be no more future commercial releases of Oracle GlassFish Server. New and existing OGS 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. In parallel, I believe there's no other company in the Java EE business that offers commercial support for more than one build of a Java EE application server. This new direction can actually help customers and partners by simplifying decisions during commercial negotiations.

Fact #3 - WebLogic is not always more expensive than OGS

Oracle GlassFish Server ("OGS") is a build of GlassFish Server Open Source Edition bundled with a set of commercial features called GlassFish Server Control and license bundles such as Java SE Support. OGS has, at the moment of this writing, a list price of US$ 5,000 per processor. One claim that some bloggers are making is that WebLogic is more expensive than this. Fact 3.1: that is not necessarily the case. The initial edition of WebLogic is called "Standard Edition" and falls under a policy where some "Standard Edition" products are licensed on a per-socket basis - as of the current price list, US$ 10,000 per socket. If you do the math, you will realize that WebLogic SE can actually be significantly more cost-effective than OGS, and a customer can save money if running on a CPU with 4 cores or more, for example. Quote from the price list: "When licensing Oracle programs with Standard Edition One or Standard Edition in the product name (with the exception of Java SE Support, Java SE Advanced, and Java SE Suite), a processor is counted equivalent to an occupied socket; however, in the case of multi-chip modules, each chip in the multi-chip module is counted as one occupied socket." For more details speak to your Oracle sales representative - this is clearly at list price, and every customer typically has a relationship with Oracle (as they do with other vendors), so different contractual details may apply. And although OGS has always been production-ready for Java EE applications, it is no secret that WebLogic has always been a more enterprise-grade, mission-critical application server than OGS, since the BEA days. Different editions of WLS provide features and add-ons like the WebLogic Diagnostic Framework, Work Managers, Side by Side Deployment, a bundled ADF and TopLink license, a bundled Web Tier (Oracle HTTP Server) license, Fusion Middleware stack support, Oracle DB integration features, Oracle RAC features (such as GridLink), Coherence management capabilities, advanced HA (Whole Service Migration and Server Migration), Java Mission Control, Flight Recorder, Oracle JDK support, etc.

Fact #4 - There's no major vendor supporting community builds of Java EE app servers

No major vendor provides support for community builds of any open source application server. For example, IBM used to provide community support for builds of Apache Geronimo, but no longer does. Red Hat does not commercially support builds of WildFly and, if I remember correctly, never supported community builds of the former JBoss AS. Oracle has never commercially supported GlassFish Server Open Source Edition builds. Tomitribe appears to be the exception to the rule, offering commercial support for Apache TomEE.

Fact #5 - WebLogic and GlassFish share several Java EE implementations

It has been no secret that although GlassFish and WebLogic share some JSR implementations (as stated in The Aquarium announcement: JPA, JSF, WebSockets, CDI, Bean Validation, JAX-WS, JAXB, and WS-AT) and WebLogic understands GlassFish deployment descriptors, they are not from the same codebase.

Fact #6 - WebLogic is not for GlassFish what JBoss EAP is for WildFly

WebLogic is a closed-source offering. It is commercialized through a license-plus-support-fee model. OGS, although built from open source code, has had the same commercial model as WebLogic. Still, one cannot compare GlassFish/WebLogic to WildFly/JBoss EAP. It is simply not the same case, since Oracle has had two different products from different codebases. The comparison should be limited to GlassFish Open Source / Oracle GlassFish Server versus WildFly / JBoss EAP. But the message now is much clearer: Oracle will commercially support only the proprietary product WebLogic, and will invest in GlassFish Server Open Source Edition as the reference implementation for the Java EE platform and the future Java EE 8 - a developer-friendly community distribution - and encourages community participation through Adopt a JSR and contributions to GlassFish. In comparison, Oracle's decision has pretty much the same goal as IBM's when it killed support for WebSphere Community Edition, and Red Hat's when it decided to change the name of the JBoss community edition to WildFly, simplifying and clarifying the marketing message and leaving the commercial field wide open to JBoss EAP only. Oracle can now, as other vendors have already been doing, focus on only one commercial offering. Some users are saying they will now move to WildFly, but it is important to note that Red Hat does not offer commercial support for WildFly builds. Although future JBoss EAP versions will come from the same codebase as WildFly, the builds will definitely not be the same, nor will they share 100% of their functionality and bug fixes. This means there will be no company running a WildFly build in production with support from Red Hat. This discussion has also raised an important and interesting point: Oracle offers a free-for-developers OTN License for WebLogic. For other environments this is different, but please note this is the same policy Red Hat applies to JBoss EAP, as stated in their download page and terms. Oracle had the same policy for OGS.

TL;DR:

- GlassFish Server Open Source Edition isn't dead.
- Current and new OGS 2.x/3.x customers will continue to have support (respecting the LSP).
- WebLogic is not necessarily more expensive than OGS.
- Oracle will focus on one commercially supported Java EE application server, just as other vendors also limit themselves to supporting one build/product only.
- Community builds are hardly ever supported.
- Commercially supported builds of open source products are not exactly the same codebase as community builds.

What's next for GlassFish and the Java EE community? There are conversations in place to tackle some of the community desires, most of them stated by Markus Eisele in his blog post. We will keep you posted.

    Read the article

  • Emtel Knowledge Series - Q2/2014

From Cyber Island to Smart Mauritius

Cyber Island? Smart Mauritius? What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of 'Smart Cities' has become an urgent necessity. 'Smart Cities' refer to an urban transformation which, by using the latest ICT technologies, makes cities more efficient. Many governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high-bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various studies have identified four key enablers for smart city success: government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision."

The Emtel Knowledge Series ties in with Emtel's 25th anniversary celebrations throughout the year, and the master of ceremony, Kim Andersen, mentioned that there will be more upcoming events on a quarterly basis. As a representative of the Mauritius Software Craftsmanship Community (MSCC) there was absolutely no hesitation to join in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly, what is it with those buzzwords? As far as I remember, and as it was mentioned, "Cyber Island" is an old initiative from around 2005/2006 which was refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here, but that's another story.

The venue and its own problems

Like last time, the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking spot. So, does Smart Island include better solutions for the search for parking spaces? Maybe - let's see whether I will be able to answer that question at the end of the article. Anyway, after circling the tower almost two times, I finally got a decent space to put the car, without risking a ticket or damage.

International speakers and their experience

Once again, Emtel did a great job of getting international expertise onto the stage to share their experience and vision on this kind of undertaking. Personally, I really appreciated that they were speakers of global reach who could provide first-hand knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. Additionally, we spoke about the effort and transformation of New York City into a greener and more efficient Smart City. Given modern technology, he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and most probably solid solutions that no one else had thought about before.

Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius

Later during the afternoon, that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Next to free higher (tertiary) education, Finland has already produced a good number of innovations, among them:

- First country to grant voting rights to women
- Free higher education
- Constitutional right to broadband connectivity
- Nokia
- Linux
- Angry Birds
- Sauna
- and others...

General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities - or better said, Smart Mauritius, given the dimensions and population size of the island. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will play in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices.

How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, Director General of the Open University of Mauritius. According to him, a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and the Open University. Nowadays, you're actually able to learn for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience!

Networking in the name of the MSCC

As briefly mentioned above, I was about to combine two approaches for this workshop: of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also meeting and greeting new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key things that happened during the day:

- Met with Arnaud Meslier and Kellie, both of Microsoft, to swap the latest information on IT events. In the process, I got an invite to the Microsoft Windows Phone 8.1 Dev Camp.
- Got in touch with Arvin Lockee of Emtel to check our options to meet with the data team, seizing the opportunity to arrange a visiting tour of the Emtel Data Centre.
- Had a great chat with Avinash Meetoo of Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general, and specifically in the private sector here in Mauritius.
- Additionally, a number of various other interesting chats...

Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create better awareness of the MSCC within local IT businesses. There is more to come soon!

Resume of the day

The number of attendees at this event doubled or even tripled compared to last time. The whole organisation was improved massively, and the combination of presentations and summarizing panel discussions was better than during the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next workshop in Q3.

Update

At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

Morning Coffee

When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products, that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

"But, we have a cluster... we don't need backups"

Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

Backup, fine. How often do I take a backup?

The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

A backup is nothing more than an untested restore

Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

Where to back up to? Network share? Locally? SAN volume?

This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
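
To ground the frequent-log-backup and offloaded consistency-check ideas above, here is a minimal T-SQL sketch. The database name, file paths and logical file names are all hypothetical; treat it as an illustration of the approach, not a drop-in script:

-- Frequent transaction log backups keep the RPO small
BACKUP LOG SalesDB
    TO DISK = N'R:\Backups\SalesDB_log.trn'
    WITH COMPRESSION, CHECKSUM;

-- On the offload server: restore the latest full backup under another name...
RESTORE DATABASE SalesDB_Check
    FROM DISK = N'R:\Backups\SalesDB_full.bak'
    WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Check.mdf',
         MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_Check.ldf';

-- ...and run the full, expensive consistency check there
DBCC CHECKDB (SalesDB_Check) WITH NO_INFOMSGS;

-- Meanwhile, production only pays for the lighter physical-only check
DBCC CHECKDB (SalesDB) WITH PHYSICALONLY;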

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO be innovative, and ensure the architectures are adaptable and agile, as we need to act today on the changes that we may request tomorrow."

Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge, a relationship, between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and, more importantly, that objectives were commonly understood - hence services and projects were delivered on time, on budget, and actually solved the business problems.

In reality IT and the business are already married, but the relationship is most often defined as IT being a 'supplier' rather than a 'trusted partner'. To deliver business value, they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if it does not get a new product to market ahead of the competition, or fails, for example, to act in a timely manner to address a new industry problem such as a legislative change. An even better example is when an application or service fails and the business takes a hit from bad publicity, from being a trending topic on social media, and from losing direct revenue from online channels. For this reason alone, business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study, which found that many IT respondents consider themselves to be trusted partners of the business, but their efforts are impaired by the inadequacy of tools and organizations.

IT, Meet the Business; Business, Meet IT

So what is going on? We talk about aligning the business with IT, but the reality is that it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier: business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well, now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2! Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput - now IT can understand IT's impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, the line of business can now understand which IT services are most critical for the KPIs they care about. There is a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products. What's next after we meet? Where do you start?

Step 1: Identify the Signature Experience.

This is THE business process (or set of processes) that is core to the business: the one that drives the economic engine, the process the customer recognises the company brand for, that carries its reputation and the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easy, as Enterprise Manager 12c makes it easy.

Step 2: Map the Signature Experience to the underlying IT.

Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end - think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business to define meaningful key performance indicators (KPIs) and thresholds to enable you to report and act upon them.

Step 3: Observe the data over time.

You now have meaningful insight into every step enabling your signature experience, and you understand the implication of that experience for your underlying IT. Watch it for a few months, see what happens, then reconvene with your business stakeholders and set clear and measurable targets which can redefine service levels.

Step 4: Change the information about which you and the business communicate.

It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability, but can be structured around business process requirements. The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take it a step further and appraise your staff on the business competencies that are linked to service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience that is core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify, as the 'cost per day of outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT, delivering the ultimate in service and satisfaction. Just engage with the business upfront, make the signature experience visible, and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.

    Read the article

  • How to free up space on RHEL6 /boot safely?

    - by ams
I am trying to do a yum update on a RHEL 6 box and I am getting this error message:

Transaction Check Error:
  installing package kernel-2.6.32-279.9.1.el6.x86_64 needs 10MB on the /boot filesystem
  installing package grub-1:0.97-77.el6.x86_64 needs 10MB on the /boot filesystem

Error Summary
-------------
Disk Requirements:
  At least 10MB more space needed on the /boot filesystem.

My /boot has the following:

# ls -lah /boot
total 74M
dr-xr-xr-x.  5 root root 2.0K Jun 10 08:05 .
drwxr-xr-x. 23 root root 4.0K Aug 27 03:08 ..
-rw-r--r--  1 root root  99K Apr 26 12:53 config-2.6.32-220.17.1.el6.x86_64
-rw-r--r--  1 root root  99K Feb 10  2012 config-2.6.32-220.7.1.el6.x86_64
-rw-r--r--. 1 root root  99K Nov  9  2011 config-2.6.32-220.el6.x86_64
drwxr-xr-x. 3 root root 1.0K Mar 29  2012 efi
drwxr-xr-x. 2 root root 1.0K Jun 10 07:53 grub
-rw-r--r--  1 root root  15M Jun 10 07:53 initramfs-2.6.32-220.17.1.el6.x86_64.img
-rw-r--r--  1 root root  15M Mar 29  2012 initramfs-2.6.32-220.7.1.el6.x86_64.img
-rw-r--r--. 1 root root  15M Mar 29  2012 initramfs-2.6.32-220.el6.x86_64.img
-rw-------  1 root root 3.4M Jun 10 08:06 initrd-2.6.32-220.17.1.el6.x86_64kdump.img
-rw-------  1 root root 3.5M Jun 10 07:53 initrd-2.6.32-220.7.1.el6.x86_64kdump.img
-rw-------  1 root root 3.4M Mar 29  2012 initrd-2.6.32-220.el6.x86_64kdump.img
drwx------. 2 root root  12K Mar 29  2012 lost+found
-rw-r--r--  1 root root 168K Apr 26 12:55 symvers-2.6.32-220.17.1.el6.x86_64.gz
-rw-r--r--  1 root root 168K Feb 10  2012 symvers-2.6.32-220.7.1.el6.x86_64.gz
-rw-r--r--. 1 root root 168K Nov  9  2011 symvers-2.6.32-220.el6.x86_64.gz
-rw-r--r--  1 root root 2.3M Apr 26 12:53 System.map-2.6.32-220.17.1.el6.x86_64
-rw-r--r--  1 root root 2.3M Feb 10  2012 System.map-2.6.32-220.7.1.el6.x86_64
-rw-r--r--. 1 root root 2.3M Nov  9  2011 System.map-2.6.32-220.el6.x86_64
-rwxr-xr-x  1 root root 3.8M Apr 26 12:53 vmlinuz-2.6.32-220.17.1.el6.x86_64
-rw-r--r--  1 root root  171 Apr 26 12:53 .vmlinuz-2.6.32-220.17.1.el6.x86_64.hmac
-rwxr-xr-x  1 root root 3.8M Feb 10  2012 vmlinuz-2.6.32-220.7.1.el6.x86_64
-rw-r--r--  1 root root  170 Feb 10  2012 .vmlinuz-2.6.32-220.7.1.el6.x86_64.hmac
-rwxr-xr-x. 1 root root 3.8M Nov  9  2011 vmlinuz-2.6.32-220.el6.x86_64
-rw-r--r--. 1 root root  166 Nov  9  2011 .vmlinuz-2.6.32-220.el6.x86_64.hmac

Here is the disk usage on /boot:

# du -h
13K   ./lost+found
282K  ./grub
247K  ./efi/EFI/redhat
249K  ./efi/EFI
251K  ./efi
75M   .

The problem is that when I got this server from my ISP, I used their default image for RHEL 6, which only allocates 100MB for /boot; clearly this is not enough. How can I get around this problem? Is it safe to delete any of the above files? Some of them seem to be on the disk more than once. Is there some way of expanding /boot without re-imaging the machine?
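
A minimal sketch of the usual low-risk route - letting yum remove the older kernel packages (and their matching initramfs images) from /boot rather than deleting files by hand; this assumes the newest kernel boots fine:

# Check which kernel is currently running before removing anything
uname -r

# package-cleanup ships in the yum-utils package on RHEL 6
yum install yum-utils

# Keep only the two newest installed kernels; older ones are removed from /boot
package-cleanup --oldkernels --count=2

# Confirm the space was reclaimed
df -h /boot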

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of its type I've built, so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on GitHub and the extension is on the webstore. The basic structure is as follows:

index.html

<html>
<head>
    <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles -->
    <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles -->
</head>
<body class="unloaded">
    <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: -->
    <div class="tab-container" tabindex="-1">
        <!-- Tab nav -->
    </div>

    <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. -->
    <template id="template.toolbar">
        <!-- Template content -->
    </template>
    <!-- Templates end -->

    <!-- Plugins -->
    <script type="text/javascript" src="js/plugins.js"></script>
    <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. -->
    <script type="text/javascript" src="js/widgets.js"></script>
    <!-- This contains the main application JS. -->
    <script type="text/javascript" src="js/script.js"></script>
</body>
</html>

widgets.js

(initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]);

// Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary
var Widgets = {
    1: { // Widget ID, this is set here so widgets can be retrieved by ID
        id: 1, // Widget ID again, this is used after the widget object is duplicated and detached
        size: 3, // Default size, medium in this case
        order: 1, // Order shown in "store"
        name: "Weather", // Widget name
        interval: 300000, // Refresh interval
        nicename: "weather", // HTML and JS safe widget name
        sizes: ["tiny", "small", "medium"], // Available widget sizes
        desc: "Short widget description",
        settings: [
            { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups.
                type: "list",
                nicename: "location",
                label: "Location(s)",
                placeholder: "Enter a location and press Enter"
            }
        ],
        config: { // Widget settings as stored in the tabs object (see script.js for storage information)
            size: "medium",
            location: ["San Francisco, CA"]
        },
        data: {}, // Cached widget data stored locally, this lets it work offline
        customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object
        refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads
        render: function() {} // This renders the widget only using information from data, it's called on page load.
    }
};

script.js

(initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]);

// Plugins, extends and globals go here. i.e. Number.prototype.pad = ....

var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs
    iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog
    iChrome.CSS(); // Dynamically generate CSS based on settings
    iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary
    iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere

    // Checks for justInstalled => show getting started are run here

    /* The main init runs the bare minimum required to display the page, this sets all non-visible or not instantly needed things (such as widget dragging) on a timeout */
    iChrome.deferredTimeout = setTimeout(function() {
        iChrome.deferred(refresh); // Pass refresh along, see above
    }, 200);
};

iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page

iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized
iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized

/* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */
var processStorage = function() {
    iChrome.Storage(function() {
        iChrome.Templates(); // Templates are read from their elements and held in a cache
        iChrome(); // Init is called
    });
};

if (typeof iChromeConfig == "object") {
    processStorage();
}

Objectives of the restructure

- Memory usage: Chrome apparently has a memory leak in extensions; they're trying to fix it, but memory still keeps increasing every time the page is loaded. The app also uses a lot on its own.
- Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything.
- Module interdependence: Right now modules call each other a lot. AFAIK that's not good at all, since any change you make to one module could affect countless others.
- Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering, the user should at least be able to remove it.

Speed is currently not an issue and I'd like to keep it that way.

How I think I should do it

The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded => init). Modules should each go in their own file; I'm thinking there should be a set of core files that all modules can rely on and call directly, and everything else should be event based. Widget structure should be kept largely the same, but maybe widgets should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled.

Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (in memory and speed) than ones written in vanilla JS? Also, can I expect to improve this with a proper restructure, or is my current code about as good as can be expected?
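
For the event-driven module wiring described in the plan above, here is a rough sketch of one way the modules could talk through a shared event bus instead of calling each other directly. The module and event names are made up, and it assumes Backbone, Underscore and the chrome.storage extension API are available:

// A single event bus; modules know about the bus, not about each other
var bus = _.extend({}, Backbone.Events);

// Storage module: fetches settings and announces them, without knowing who listens
var Storage = {
    load: function() {
        chrome.storage.local.get("config", function(items) {
            bus.trigger("storage:loaded", items.config);
        });
    }
};

// App module: initializes only once storage says it's ready
// (the "on storage.loaded => init" idea from the plan)
bus.on("storage:loaded", function(config) {
    // render tabs and widgets here using config
});

Storage.load();

Because no module holds a direct reference to another, a misbehaving widget can be isolated by wrapping its event handler in try/catch, which also speaks to the fault-tolerance objective.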

    Read the article

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
OK, in an attempt to get some response, a TL;DR version. I know that the following command:

iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080

...will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if someone decides to enable SSH later, they can connect directly without worrying about iptables. What is needed to reliably forward all traffic from an external IP to an internal IP, and vice versa?

Extended version

I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST), with several virtual machines (VMs) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VMs run under VirtualBox and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it lags greatly. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) are running Linux. What I did was the following:

1. Configure the VM to have a static IP in the network of the vboxnet0 adapter.
2. Add an IP alias to the host, registering the dedicated (outside) IP.
3. Set up the system to allow traffic to be forwarded (via sysctl).
4. Configure iptables to DNAT and SNAT data from a given IP address to the internal address.

iptables commands:

sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A POSTROUTING -t nat -j MASQUERADE
iptables -t nat -I PREROUTING -d $OUT_IP -i eth0 -j DNAT --to-destination $IN_IP
iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

Now the site works, but it is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now. Some background info: before this, the site was working with basic port forwarding. E.g. port 80 was mapped to port 8080 using iptables. In VirtualBox (with the network adapter configured as NAT), a port forward in the other direction made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc.). Second, it only allowed one IP address to use port 80. To resolve this, multiple external IP addresses are used for different (sub)domains. Likewise, the "VirtualBox" network will contain the virtual machines:

DNS            Ext. IP    Adapter    VM           "VirtualBox" IP
------------------------------------------------------------------
a.example.com  1.1.1.1    eth0:1     vm_guest_1   192.168.56.1
b.example.com  2.2.2.2    eth0:2     vm_guest_2   192.168.56.2
c.example.com  3.3.3.3    eth0:3     vm_guest_3   192.168.56.3

And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (or, put differently, from 1.1.1.1 to 192.168.56.1). And to achieve this with an acceptable speed :).
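
For reference, a minimal sketch of the per-VM 1:1 NAT rules the question is after, using the addresses from the table above; this is untested and assumes forwarding is permitted by the rest of the ruleset:

# Enable IP forwarding (persist it in /etc/sysctl.conf)
sysctl -w net.ipv4.ip_forward=1

# Allow forwarded traffic to and from the first guest
iptables -A FORWARD -d 192.168.56.1 -j ACCEPT
iptables -A FORWARD -s 192.168.56.1 -j ACCEPT

# 1:1 NAT: everything arriving for the public IP goes to the guest...
iptables -t nat -A PREROUTING -d 1.1.1.1 -j DNAT --to-destination 192.168.56.1

# ...and the guest's outbound traffic leaves stamped with that public IP
iptables -t nat -A POSTROUTING -s 192.168.56.1 -o eth0 -j SNAT --to-source 1.1.1.1

Repeating the last three rules for each public IP / guest pair covers all ports at once, with no per-port rules needed.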

    Read the article

  • CHAT ROOMs 7 by 6

    - by user2939942
    I am looking for a chat room on one page showing 7 logged-in users per row and 6+ rows, i.e. around 42 users, and these will keep growing as new users are added. I need urgent help; a pretty unusual question for most of you. The required new features are:
    Usernames are unique to users currently chatting
    You can see a "currently chatting" user list
    There are multiple rooms for chatting

    This is the page so far:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
      <title>Simpla Admin</title>
      <link rel="stylesheet" href="resources/css/reset.css" type="text/css" media="screen" />
      <link rel="stylesheet" href="resources/css/style.css" type="text/css" media="screen" />
      <link rel="stylesheet" href="resources/css/invalid.css" type="text/css" media="screen" />
      <script type="text/javascript" src="resources/scripts/jquery-1.3.2.min.js"></script>
      <script type="text/javascript" src="resources/scripts/simpla.jquery.configuration.js"></script>
      <script type="text/javascript" src="resources/scripts/facebox.js"></script>
      <script type="text/javascript" src="resources/scripts/jquery.wysiwyg.js"></script>
      <script type="text/javascript" src="resources/scripts/jquery.datePicker.js"></script>
      <script type="text/javascript" src="resources/scripts/jquery.date.js"></script>
      <script type="text/javascript" src="suggest3.js"></script>
      <script type="text/javascript">
        function popitappup4() {
          var aid = document.a.cid.value;
          var url = "followup.php?id=" + aid;
          alert(url);
          newwindow = window.open(url, 'name', 'height=480,width=480,scrollbars=yes');
          if (window.focus) { newwindow.focus(); }
          return false;
        }
      </script>
      <script type="text/javascript" src="highslide-with-html.js"></script>
      <link rel="stylesheet" type="text/css" href="highslide.css" />
      <script type="text/javascript">
        hs.graphicsDir = 'graphics/';
        hs.outlineType = 'rounded-white';
        hs.wrapperClassName = 'draggable-header';
      </script>
      <link type="text/css" rel="stylesheet" media="all" href="css/chat.css" />
      <link type="text/css" rel="stylesheet" media="all" href="css/screen.css" />
    </head>
    <body onload="fnew()">
      <div id="body-wrapper"> <!-- Wrapper for the radial gradient background -->
        <div id="sidebar">
          <div id="sidebar-wrapper"> <!-- Sidebar with logo and menu -->
            <h1 id="sidebar-title"><a href="#"></a></h1>
            <a href="#"><img id="logo" src="resources/images/logo.png" alt="Simpla Admin logo" /></a> <!-- Logo (221px wide) -->
            <script type="text/javascript" src="js/jquery.js"></script>
            <script type="text/javascript" src="js/chat.js"></script>
            <script type="text/javascript">
              function fnew() { document.getElementById("psearch").focus(); }
            </script>
            <!-- Sidebar OPD search form -->
            <form name="frm" action="opd_view1.php">
              <table width="240" border="0" cellspacing="0" cellpadding="0">
                <tr>
                  <td width="210"><div align="right" style="font-size:22px; color:#FFFFFF"><b>OPD Search</b></div></td>
                  <td width="30"></td>
                </tr>
                <tr>
                  <td align="right"><input type="text" name="psearch" id="psearch" class="text-input" style="width:45mm;" /></td>
                  <td align="right"></td>
                </tr>
              </table>
            </form>
            <!-- Sidebar Profile links -->
            <div id="profile-links">
              <a href="welcome.php" title="Sign Out" style="font-size:16px"><b> </b></a> <br />
              <a href="sample.php" title="Chat">Chat</a>
            </div>
          </div>
        </div> <!-- End #sidebar -->
        <div id="main-content"> <!-- Main Content Section with everything -->
          <noscript><!-- Show a notification if the user has disabled javascript --></noscript>
          <div style="width:100%; height:600px; overflow-x:scroll; scrollbar-arrow-color:blue; scrollbar-face-color:#e7e7e7; scrollbar-3dlight-color:#a0a0a0; scrollbar-darkshadow-color:#888888; background-color:#FFFFFF">
            <!-- Page Head: one shortcut button per user; a green background marks users currently online -->
            <ul class="shortcut-buttons-set" id="user-list"></ul>
            <script type="text/javascript">
              // The original markup repeated an identical <li> for each of these
              // 71 users (1 = currently online, shown with the green background);
              // rendering them from data keeps the page equivalent but far shorter.
              var users = [
                ["drabhinit", 1], ["drvarun", 0], ["sameer", 1], ["drchetan", 1],
                ["neema", 1], ["drpriya", 0], ["drchhavi", 0], ["drsanjay", 0],
                ["ruchi", 0], ["drarchana", 1], ["drshraddha", 0], ["sunita", 1],
                ["reshma", 0], ["riya", 0], ["drritesh", 0], ["rachana", 0],
                ["sunita", 1], ["kavye", 0], ["paridhi", 1], ["paridhi", 0],
                ["drsonika", 0], ["anny", 0], ["nitansh", 0], ["drekta", 0],
                ["drritesh", 0], ["neeraj", 1], ["neeraj", 1], ["drneha", 1],
                ["kirti", 0], ["drratna", 0], ["drratana", 0], ["drnoopur", 0],
                ["admin k", 0], ["web", 0], ["drarti", 0], ["drsaqib", 0],
                ["neelesh", 0], ["pooja", 0], ["drneha", 0], ["drnupur", 0],
                ["isha", 1], ["isha", 1], ["drnamrata", 1], ["ashish", 1],
                ["ambrish", 1], ["drrashmi", 0], ["drsapna", 1], ["manisha", 1],
                ["Isha", 0], ["drrashmi", 0], ["Dr Meghna", 1], ["akanksha", 1],
                ["drashish", 0], ["drpriya", 0], ["drnitya", 0], ["drmanoj", 0],
                ["sonali", 0], ["drkhushbu", 1], ["drpriyanka", 1], ["drabhishek", 1],
                ["drpoonam", 0], ["drprachi", 1], ["drpeenal", 1], ["neerajpune", 0],
                ["paridhipune", 0], ["faeem", 0], ["rahul", 1], ["DrNeha", 1],
                ["drmrigendra", 0], ["neetu", 0], ["drriteshpawar", 1]
              ];
              var html = "";
              for (var i = 0; i < users.length; i++) {
                html += '<li><a class="shortcut-button" href="javascript:void(0)"' +
                        ' onclick="chatWith(\'' + users[i][0] + '\')" rel="modal"' +
                        (users[i][1] ? ' style="background-color:#00FF00"' : '') +
                        '><span><img src="resources/images/icons/comment_48.png" alt="icon" width="48" height="48" /><br/>' +
                        users[i][0] + '</span></a></li>';
              }
              document.getElementById("user-list").innerHTML = html;
            </script>
          </div>
          <!-- End .shortcut-buttons-set -->
          <div class="clear"></div>

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53  | Next Page >