Search Results

Search found 13104 results on 525 pages for 'malcolm box'.


  • Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark (TOTD #184)

    - by arungupta
    TOTD #183 explained how to build a WebSocket-driven application using GlassFish 4. This Tip Of The Day (TOTD) will explain how to view/debug on-the-wire messages, or frames as they are called in WebSocket parlance, over this upgraded connection. This blog will use the application built in TOTD #183.

    First of all, make sure you are using a browser that supports WebSocket. If you recall from TOTD #183, WebSocket is a combination of a protocol and a JavaScript API, so a browser must understand both for your WebSocket-enabled pages to work. caniuse.com/websockets provides the current status of WebSocket support in different browsers. Most of the major browsers, such as Chrome, Firefox, and Safari, have supported WebSocket for the past few versions. As of this writing, IE still does not support WebSocket; however, it is planned for a future release.

    Viewing WebSocket frames requires special settings because all the communication happens over an upgraded HTTP connection on a single TCP connection. If you are building your application using Java, then there are two common ways to debug WebSocket messages today. Other language libraries provide different mechanisms to log the messages. Let's get started!

    Chrome Developer Tools provide information about the initial handshake only. This can be viewed in the Network tab by selecting the endpoint hosting the WebSocket endpoint. You can also click on "WebSockets" on the bottom-right to show only the WebSocket endpoints. Click on "Frames" in the right panel to view the actual frames being exchanged between the client and server. The frames are not refreshed when new messages are sent or received; you need to refresh the panel by clicking on the endpoint again.

    To see more detailed information about the WebSocket frames, type "chrome://net-internals" in a new tab. Click on "Sockets" in the left navigation bar and then on "View live sockets" to see the page. Select the box with the address of your WebSocket endpoint to see some basic information about the connection and the bytes exchanged between the client and the endpoint. Clicking on the blue "source dependency ..." text shows more details about the handshake.

    If you are interested in viewing the exact payload of WebSocket messages, then you need a network sniffer. These tools are used to snoop network traffic and provide a lot more detail about the raw messages exchanged over the network. However, because they provide so much information, they need to be configured to show only what is relevant. Wireshark (nee Ethereal) is a pretty standard tool for sniffing network traffic and will be used here. For this blog's purpose, we'll assume that the WebSocket endpoint is hosted on the local machine; these tools do allow you to sniff traffic across the network, though. Wireshark is quite a comprehensive tool, and we'll capture traffic on the loopback address. Start Wireshark, select "loopback" and click on "Start". By default, all traffic on the loopback address is displayed. That includes tons of TCP protocol messages, traffic from applications running on your local machine (like GlassFish or Dropbox on mine), and much more. Specify "http" as the filter in the top-left. Invoke the application built in TOTD #183 and click on the "Say Hello" button once.
    The resulting Wireshark capture is shown in the original post. Here is a description of the messages exchanged:

    Message #4: Initial HTTP request of the JSP page
    Message #6: Response returning the JSP page
    Message #16: HTTP Upgrade request
    Message #18: Upgrade request accepted
    Message #20: Request favicon
    Message #22: Responding with favicon not found
    Message #24: Browser making a WebSocket request to the endpoint
    Message #26: WebSocket endpoint responding back

    You can also use Fiddler to debug your WebSocket messages. How are you viewing your WebSocket messages?

    Here are some references for you:

    JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
    TOTD #183 - Getting Started with WebSocket in GlassFish

    Subsequent blogs will discuss the following topics (not necessarily in that order): binary data as payload, custom payloads using encoder/decoder, error handling, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, and other topics from the API.
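    For readers who don't have TOTD #183 handy, the kind of endpoint whose frames are captured above is, roughly, a JSR 356 annotated endpoint. The sketch below uses the final JSR 356 package names; the early-draft API in GlassFish 4 promoted builds at the time differed slightly, and the class name and path here are illustrative rather than the actual TOTD #183 code:

        import javax.websocket.OnMessage;
        import javax.websocket.server.ServerEndpoint;

        // Hypothetical echo endpoint: every text frame received from the
        // browser is answered with a text frame, which is what shows up as
        // the request/response pair (messages #24 and #26) in the capture.
        @ServerEndpoint("/websocket")
        public class HelloEndpoint {

            @OnMessage
            public String sayHello(String name) {
                return "Hello " + name;
            }
        }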

    Read the article

  • Create a Search Filter List in Google Chrome

    - by Asian Angel
    Are you tired of unwanted and/or non-relevant results cluttering up the search results at Bing, Yahoo, and Google? With the Search Filter extension for Chrome you can easily remove the unwanted "chaff" from your search results. Note: The extension only works on Bing, Yahoo, and Google at this time.

    Before: For our example we conducted a search for "anime wallpapers" at Yahoo Singapore, Bing Singapore, and Google. In each set of results we decided to focus on results that would display either a yellow or red warning color from WOT. The targeted results for Yahoo Singapore, Bing Singapore, and Google are shown in the original article's screenshots.

    Search Filter in Action: As soon as you install the extension you should take a quick look at the "Options". At first the "Filters List Area" will be empty, but it will not remain so for long as you create your own filter list. The second part may or may not be of interest to you: the ability to opt into the filter service. If you opt in, your filter list will be connected to your "Google Account" and will be available on any of your Chrome installs with the extension installed (and set to "Opt In"). Keep in mind that if you choose this option, the filter list that you create will be aggregated anonymously and have a GUID number attached to it.

    After installing the extension we refreshed each of our three search pages; notice the small red circle button beside each search result link. Clicking on the red circle button will cause the entire browser window area to "shade out" temporarily while you decide between adding that website to the filter list or cancelling the action. If you add a website to the filter list, that result will immediately disappear from the search results list without refreshing the webpage. After adding one website from each of the three search services, our filter list had gotten off to a nice start. If you accidentally add a website to the list, or change your mind about a website, simply click on the red circle button to remove that particular listing.

    Conclusion: If you are looking for an easy way to create a search filter list, then this is definitely an extension that is worth taking the time to look at.

    Links: Download the Search Filter extension (Google Chrome Extensions). Visit the Search Filter Hub Website to View Lists of Filtered Sites.

    Read the article

  • Plymouth and GRUB do not show at all

    - by WarriorIng64
    I am using Ubuntu 11.04 64-bit as my only OS on my desktop computer, which used to run only Ubuntu 10.04 LTS until I had the time to upgrade it with a fresh install. It uses integrated NVIDIA graphics (listed as a GeForce 6150SE nForce 430 by the NVIDIA X Server Settings utility) with the current proprietary driver as provided by the Additional Drivers utility, and has a VGA connection to a 1680x1050 Acer monitor.

    I used to get the (ugly-looking version of the) Plymouth graphical boot screen under 10.04. It didn't look that great, but I was fine with it. Now, on 11.04, it doesn't show at all during boot (I just get an error message in a moving gray box from the monitor saying "Input Not Supported"), and only rarely will it show on shutdown, all garbled up. I could not get GRUB to show during boot while holding down Shift, either (same error message), but pressing Enter while it should be up starts the system normally. A picture of the error message I was getting is attached in the original question. Once fully booted, the system still shows the login screen and desktop just fine. Any information on how to troubleshoot this would be appreciated. If there's any hardware-specific stuff I forgot to include here, let me know the relevant commands to run in a comment below.

    Things that I've tried:

    1. Running plymouth in a framebuffer: no effect
    2. Booting with nomodeset as a grub boot option: no effect
    3. Booting with nomodeset and plymouth in a framebuffer: no effect other than Plymouth showing during shutdown only
    4. Following the Softpedia instructions for fixing Plymouth's resolution: problem mostly solved, except the logo does not show in Plymouth during boot, and both GRUB and Plymouth are slightly off-center
    5. #4 above, but with nomodeset removed as a grub boot option: same effect as #4
    6. #5 above, but with vt.handoff=7 added as a grub boot option: same effect as #4

    I have added the current contents of /etc/default/grub as requested in the comments:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash video=uvesafb:mode_option=1280x1024-24,mtrr=3,scroll=ywrap"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        GRUB_GFXMODE=1280x1024
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    CURRENT STATUS: I forgot to uncomment one line as per "things that I've tried" #4, so I took care of that. I can now see GRUB during startup when I hold Shift, and a normal-looking Plymouth during shutdown... but Plymouth during boot is now just a solid purple screen.
    In each case, it's displayed a little off-center to the left, with a thin black bar running down the right side of the monitor. The error pictured above no longer shows. I'd say this problem is about two-thirds solved now.

    UPDATE: After Natty started freezing up on me, I decided to dual-boot with Oneiric, which unfortunately shows the same problems. Rather than trying all these workarounds, though, I decided to do what I should have done from the start and file a pair of bug reports.

    LAST UPDATE: Bug 850908 has been confirmed as a legitimate nouveaufb bug. I have overwritten my 11.04 partition with 12.04 LTS, and I can confirm at this time that the issue is present there as well. I will now flag this question to be closed, but I hope it was helpful for anyone who experienced similar issues; if you are still having the same problem as me, please go there and mark yourself as affected. Thanks!
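    One small aside for anyone following along: as the header comment in /etc/default/grub above says, changes to that file only take effect after regenerating grub.cfg, so after each experiment you would run:

        sudo update-grub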

    Read the article

  • New Release Overview Part 2

    - by brian.harrison
    To continue our discussion of the next release of WCI, let's take a look at a few other new features that have been developed and tested.

    Password Management: With customer implementations increasingly facing externally, we found that customers wanted to use the native users within the portal, because they did not want to provide an externally facing LDAP server. However, the portal did not provide anything close to the level of password policy that a standard LDAP environment would. That being the case, we made the decision to provide the same kinds of password policies directly within WCI that a standard LDAP environment would have:

    Password Expiration - In how many days will a password expire, forcing the user to change it? Also, how many days prior to expiration will the user be notified that their password is about to expire?
    Password Rotation - How many of your previous passwords will you be unable to reuse when changing your password?
    Password Policies - What are the requirements for the password being created by the user? Number of characters, numbers required, symbols required, capitalization required.
    Easily Configurable - Configuration is handled through the Portal Settings utility within Administration. All options are available on the main page of the utility.

    In addition to the configuration options mentioned above, there has also been a complete rewrite of the Change Password screen to provide better information to the user when changing their password. The Change Password screen will now provide a red light/green light listing of all the policies the user must meet for the changed password to be successful. As the user types the password, the red lights change to green lights as the policies are met. In addition, text shows next to the password text box stating which policy has not been met yet. NOTE: The password policy functionality does not apply within the User Editor page in Administration. We did not want to remove the option for Administrators to change a user's password on the fly in the case of a password reset situation.

    Miscellaneous Features: In addition to the Password Management feature, there are a few other features related to WCI that should be mentioned.

    Consolidated Installer - Instead of having up to 12 or 13 different installers, one for each of the main products and separate services, we are going to provide only two installers: one for Collaboration and its respective images, and a second containing WCI and all of the relevant services required for a WCI architecture, as well as the IDK, .NET App Accelerator, SharePoint Console, all Content Web Services, and Identity Services.
    Updated Documentation - Most of us are aware that the documentation hasn't been properly kept up to date over the last couple of releases. We are doing everything we can to remedy this in the next release by consolidating and reviewing everything that is available. We are making sure to fill in the gaps, add documentation for all the new functionality, and clear out anything that is no longer valid in the newly released version.

    I hope that you enjoyed reading through this new release information. Next time we will start to talk about the new functionality that will be available within the next release of Collaboration.
If there is anything in particular that you would like to get more detail about, then please don't hesitate to send me a comment.

    Read the article

  • Clone a Hard Drive Using an Ubuntu Live CD

    - by Trevor Bekolay
    Whether you’re setting up multiple computers or doing a full backup, cloning hard drives is a common maintenance task. Don’t bother burning a new boot CD or paying for new software – you can do it easily with your Ubuntu Live CD. Not only can you do this with your Ubuntu Live CD, you can do it right out of the box – no additional software needed!

    The program we’ll use is called dd, and it’s included with pretty much all Linux distributions. dd is a utility used to do low-level copying – rather than working with files, it works directly on the raw data of a storage device.

    Note: dd gets a bad rap, because like many other Linux utilities, it can be very destructive if misused. If you’re not sure what you’re doing, you can easily wipe out an entire hard drive, in an unrecoverable way. The flip side of that is that dd is extremely powerful, and can do very complex tasks with little user effort. If you’re careful, and follow these instructions closely, you can clone your hard drive with one command.

    We’re going to take a small hard drive that we’ve been using and copy it to a new hard drive, which hasn’t been formatted yet. To make sure that we’re working with the right drives, we’ll open up a terminal (Applications > Accessories > Terminal) and enter the following command:

        sudo fdisk -l

    We have two small drives: /dev/sda, which has two partitions, and /dev/sdc, which is completely unformatted. We want to copy the data from /dev/sda to /dev/sdc. Note: while you can copy a smaller drive to a larger one, you can’t copy a larger drive to a smaller one with the method described below.

    Now the fun part: using dd. The invocation we’ll use is:

        sudo dd if=/dev/sda of=/dev/sdc

    In this case, we’re telling dd that the input file (“if”) is /dev/sda, and the output file (“of”) is /dev/sdc. If your drives are quite large, this can take some time, but in our case it took just under a minute. If we run sudo fdisk -l again, we can see that, despite not formatting /dev/sdc at all, it now has the same partitions as /dev/sda. Additionally, if we mount all of the partitions, we can see that all of the data on /dev/sdc is now the same as on /dev/sda. Note: you may have to restart your computer to be able to mount the newly cloned drive.

    And that’s it. If you exercise caution and make sure that you’re using the right drives as the input file and output file, dd isn’t anything to be scared of. Unlike other utilities, dd copies absolutely everything from one drive to another – that means that you can even recover files deleted from the original drive in the clone!
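    As a side note, dd’s default block size is quite small, so large copies are often sped up considerably by specifying a bigger one. This variant is a common invocation; the 4 MB block size is just a reasonable starting point, not something from the original article:

        sudo dd if=/dev/sda of=/dev/sdc bs=4M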

    Read the article

  • Share Files and Folders and Internet between Guest OS and the Host in Hyper-V

    - by Manesh Karunakaran
    Those who are familiar with the Virtual PC, VMware and VirtualBox environments will be quite irritated to find out that there is no direct way to share files from the host machine to the virtualized guest environment. This is a good thing from a CIO perspective, because it gives the virtualized environments excellent isolation, but for developer junkies like us it is an irritant, especially for those who have nuked their Windows 7 OS and installed Windows Server 2008 R2 for all the SharePoint friendliness that it offers. Here’s a quick five-minute how-to on enabling shared folders and Internet access for Hyper-V images, for those who are still struggling with this.

    Step 1: Add a Virtual Network Adapter to your Guest OS. For this, shut down the guest machine, go to its settings and add a virtual network adapter, as shown in the original post’s images.

    Step 2: Enable Virtual Networking in Hyper-V. Setting this up is very easy. In the Hyper-V Manager, under Actions (right panel), click the Virtual Network Manager. In the Virtual Network Manager, in the Create virtual network panel, select Internal and click the Add button. At this point, if you open Control Panel\Network and Internet\Network Connections you will be able to see the new network adapter. Now rename it to something meaningful other than "Network Adapter X". You can now add this network to each of your virtual machines, but at this point, unless you assign an IP address in each connection, you won't be able to do much.

    Step 3: Enable Internet Connection Sharing so that guest OSes can also connect to the Internet. To enable ICS follow these steps: Click on the network icon in the tray of your host machine and select Network and Sharing Center. From there click Manage network connections. Select the network adapter that you use to access the Internet, right-click it and select Properties. In the properties dialog select the Sharing tab. On this tab, check the box that says "Allow other network users..." and then set the Home networking connection to be the network adapter that was created above (now you see why I said to rename it to something useful). Now your virtual machines that have this network connection will automatically get an IP address and will be able to connect to the Internet (provided your internet connection is working). Because each adapter also gets an automatic address, you can now share files and folders between your host and your virtual machines, which is important since you can't just drag-and-drop files like you can with Virtual PC.

    Step 4: Create a shared folder on the host machine and use it in the guest machine. Right-click on the folder that you want to share, select ‘Share with\Specific People’ and specify who can access the share. Open the guest OS from Hyper-V, navigate to Start > Run and type in the address of the share (or map a drive to the share). Bingo! The share opens!! :)

    Now you can share as many files and folders as you want between the host and the guest, and you also have Internet access inside the virtual machines. Hope that helps.
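    As a quick illustration of step 4, the share can be opened or mapped from the guest with the standard Windows syntax; HOSTNAME and SharedFolder below are placeholders for your own host and share names:

        rem Open the share in Explorer (also works pasted into Start > Run)
        start \\HOSTNAME\SharedFolder

        rem Or map it to a drive letter for persistent access
        net use Z: \\HOSTNAME\SharedFolder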

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in.

    I saw on one of the Microsoft sites that .NET 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be, beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev web host.

    My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VMs on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might be going on in IIS that doesn't happen in Visual Studio?

    In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle, or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped from which it could get settings from the database. This makes total sense.

    The fix is sort of a hack, where I don't set up the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS.
    In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or running only after Application_Start was called. In any case, it's a weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious, because it renders the mobile/alternate view functionality very much broken.
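    For illustration, the deferred-setup hack described above looks roughly like the sketch below. This is not the actual POP Forums source; the setup helper is hypothetical, but the shape - an IHttpModule that postpones its wiring until the first BeginRequest, by which time Application_Start() has registered the repository mappings - is what the post describes:

        using System.Web;

        public class ErrorLoggingModule : IHttpModule
        {
            private static bool _initialized;
            private static readonly object _initLock = new object();

            public void Init(HttpApplication context)
            {
                // Do nothing here: the module is created before
                // Application_Start(), so the repos aren't mapped yet.
                context.BeginRequest += (sender, e) =>
                {
                    if (_initialized) return;
                    lock (_initLock)
                    {
                        if (_initialized) return;
                        SetUpInnards(); // hypothetical: resolve settings repo from the container
                        _initialized = true;
                    }
                };
            }

            private static void SetUpInnards()
            {
                // Safe now: Application_Start() has run, so the Ninject
                // container can resolve ISettingsRepository and friends.
            }

            public void Dispose() { }
        }

    The trade-off the author mentions falls straight out of this shape: anything the module is supposed to do (here, error logging) silently does nothing until the first request has warmed the app up.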

    Read the article

  • Customizing Spaces UI

    - by vijaykumar.yenne
    In the most common scenarios we stumble upon use cases to customize the WebCenter Spaces UI. Is the Spaces UI customizable? To what extent can we customize it? How do I customize it? These are some questions that developers/architects normally come across. Well, to clear the air: OOTB Spaces comes with some default "site templates", and it also gives you the flexibility to create custom site templates suiting the organization's needs. The site templates concept was introduced in the latest PS1 release of WebCenter, and to customize or create a new site template we have to leverage the Extend Spaces Project available on OTN. You can download the project from here. There is also a white paper, listed here, on what can be customized/extended from the Spaces perspective, and specific details are outlined on how to create a custom site template in the Customizing Site Template white paper.

    One of the things the white paper highlights is: "While you can create new site templates and modify the sample site templates, you cannot modify either of the out-of-the-box site templates", i.e. the default and maximized ones. So if my need is to either increase the size of the header to fit in a bigger logo, or introduce a couple of extra links on the default/maximized layout, how do I achieve this? All you need to do is customize the OOTB shell (shell-config.xml):

    1. Copy the shell configs available in the Source Files directory of the unzipped extended spaces directory into the CustomSiteTemplate project: ExtendWebCenterSpaces\CustomSiteTemplate\custom\oracle\webcenter\webcenterapp\metadata\shell
    2. Modify the appropriate shell.
    3. Deploy the CustomSiteTemplate as an ADF JAR.
    4. Ensure you have the profile dependency on the above project in the custom WebCenter Spaces project.
    5. Deploy the Spaces extension on the WebCenter Spaces instance (details in the first white paper).

    You should see the changes immediately. E.g., in the default shell I changed the height from 30 to 60 to increase the header size: height="60". The result is shown in the original post's screenshot.

    If you worked in the R1 release time frame and created a custom shell/chrome, how do you make it compatible and available in the Spaces PS1 instance? All you need to do is the following:

    1. Copy the custom shell into the shell directory of the CustomSiteTemplate project.
    2. Register the shell with WCSiteTemplates.xml, available in the same project. E.g., you can add an entry with the following attributes (the surrounding XML markup was stripped from this excerpt): pagePath="/oracle/webcenter/webcenterapp/view/templates/MyShellTemplate.jspx", pageDefPath="/oracle/webcenter/webcenterapp/bindings/pageDefs/oracle_webcenter_webcenterapp_view_templates_WebCenterAppShellTemplatePageDef.xml", displayName="myShell", chromeLevel="myShell". Note: pagePath is the absolute path of the template JSPX file, and this path must be unique.

    So you might have to do the following to get your custom chrome working absolutely fine with no problems at all:

    1. Create a JSPX page, say /custom/mysite/SiteTemplate.jspx.
    2. Include the default JSPX in the new site template (the SiteTemplate.jspx listing was lost from this excerpt).
    3. Add the newly created site template to the WCSiteTemplates.xml file with the following attributes: pagePath="/custom/mysite/SiteTemplate.jspx", pageDefPath="/oracle/webcenter/webcenterapp/bindings/pageDefs/oracle_webcenter_webcenterapp_view_templates_WebCenterAppShellTemplatePageDef.xml", displayName="myShell", chromeLevel="myShell".
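    For readability, here is a plausible reconstruction of that registration entry with its markup restored. The angle brackets and element name were stripped from the excerpt above, so the siteTemplate element name is an assumption inferred from the attribute names; verify it against the WCSiteTemplates.xml shipped in the project:

        <!-- Element name assumed; attribute values are from the excerpt above -->
        <siteTemplate pagePath="/custom/mysite/SiteTemplate.jspx"
                      pageDefPath="/oracle/webcenter/webcenterapp/bindings/pageDefs/oracle_webcenter_webcenterapp_view_templates_WebCenterAppShellTemplatePageDef.xml"
                      displayName="myShell"
                      chromeLevel="myShell"/>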

    Read the article

  • JDBC Triggers

    - by Tim Dexter
    Received a question from a customer last week; they were using the new rollup patch on top of 10.1.3.4.1. What are these boxes for? Don't you know? Surely? Well, they are for ... that new functionality, you know, it's in the user docs, that thingmabobby doodah. OK, I don't know either. I can have a guess, but let me check first. Several IM sessions, emails and a dig through the readme for the new patch, and I had my answer. It's not in the official documentation, yet. Leslie is on the case.

    The two fields were designed to allow an admin to set a user's context attributes before a connection is made to a database, and to unset the attributes after the connection is broken by the extraction engine. We got a sample from the Enterprise Manager team on how they will be using it with their VPD connections:

        FUNCTION bip_to_em_user (user_name_in IN VARCHAR2)
          RETURN BOOLEAN
        IS
        BEGIN
          SETEMUSERCONTEXT(user_name_in, MGMT_USER.OP_SET_IDENTIFIER);
          RETURN TRUE;
        END bip_to_em_user;

    It is used in the JDBC data source definition like this (pre-process function):

        sysman.mgmt_bip.bip_to_em_user(:xdo_user_name)

    You can, of course, call any function that is going to return a boolean value. Another example might be:

        FUNCTION set_per_process_username (username_in IN VARCHAR2)
          RETURN BOOLEAN
        IS
        BEGIN
          SETUSERCONTEXT(username_in);
          RETURN TRUE;
        END set_per_process_username;

    Just use your own function/package to set some user context. I'm very grateful for the mail from Leslie on the EM team's usage, but I had to try it out. Rather than set up a VPD, I opted for a simpler test: can I log the comings and goings of users and their queries using the same pre-process text box? Reaching back into the depths of my developer brain to remember some PL/SQL (it was not that deep), I came up with:

        CREATE OR REPLACE FUNCTION BIPTEST (user_name_in IN VARCHAR2, smode IN VARCHAR2)
          RETURN BOOLEAN
        AS
        BEGIN
          INSERT INTO LOGTAB VALUES (user_name_in, SYSDATE, smode);
          RETURN TRUE;
        END BIPTEST;

    To call it in the pre-fetch trigger:

        BIPTEST(:xdo_user_name, 'Start')

    (The excerpt showed the call without the second argument, but the function expects the mode flag; I used the pre- and post- boxes with 'Start' and 'Finish' respectively.) Not going to set the PL/SQL world alight, I know, but you get the idea. As a new connection is made to the database, it's logged in the LOGTAB table. The SMODE value just records whether it's an entry or an exit:

        NAME           UPDATE_DATE                        S_FLAG
        oracle         14-MAY-10 09.51.34.000000000 AM    Start
        oracle         14-MAY-10 10.23.57.000000000 AM    Finish
        administrator  14-MAY-10 09.51.38.000000000 AM    Start
        administrator  14-MAY-10 09.51.38.000000000 AM    Finish
        oracle         14-MAY-10 09.51.42.000000000 AM    Start
        oracle         14-MAY-10 09.51.42.000000000 AM    Finish

    It works very well. I had some fun trying to find a nasty query for the extraction engine so that the timestamps from in to out actually had a difference. That engine is fast! The only derived value you can pass from BIP is :xdo_user_name; none of the other server values are available. Connection pools are not currently supported, but they are planned for a future release. Now you know what those fields are for - and look for some official documentation, rather than my ramblings, coming soon!

    Read the article

  • Using progress dialog in Visual Studio extensions

    - by Utkarsh Shigihalli
    Originally posted on: http://geekswithblogs.net/onlyutkarsh/archive/2014/05/23/using-progress-dialog-in-visual-studio-extensions.aspx

    As a Visual Studio extension developer, you are required to keep the aesthetics of Visual Studio intact when you integrate your extension with it. Your extension looks odd when you try to use plain Windows controls and dialogs. The Visual Studio SDK exposes many interfaces so that your extension can look as integrated with Visual Studio as possible. When your extension is performing a long-running task, you have many options to notify the user of progress. One such option is the Visual Studio status bar; I have previously blogged about displaying progress through it. In this blog post I am going to highlight another way, using the IVsThreadedWaitDialog2 interface. One thing to note: as the name suggests, IVsThreadedWaitDialog2 is a dialog, hence the user cannot perform any other action while it is being shown, yet Visual Studio seems responsive to the user even while the task is being performed. Visual Studio itself makes heavy use of this interface; one example is when you are loading a solution (.sln) with a lot of projects, where Visual Studio displays a dialog implemented by this interface (screenshot in the original post).

    So the first step is to get an instance of the IVsThreadedWaitDialog2 interface using the IServiceProvider interface:

        var dialogFactory = _serviceProvider.GetService(typeof(SVsThreadedWaitDialogFactory)) as IVsThreadedWaitDialogFactory;
        IVsThreadedWaitDialog2 dialog = null;
        if (dialogFactory != null)
        {
            dialogFactory.CreateInstance(out dialog);
        }

    If your package is initialized properly, the out object dialog will not be null and will contain an instance of the IVsThreadedWaitDialog2 interface. Once you have the instance, you call its methods to manage the dialog. I will cover three methods - StartWaitDialog, EndWaitDialog and HasCanceled - in this blog post. You show the progress dialog as below:

        if (dialog != null && dialog.StartWaitDialog(
            "Threaded Wait Dialog",
            "VS is Busy",
            "Progress text",
            null,
            "Waiting status bar text",
            0,
            false,
            true) == VSConstants.S_OK)
        {
            Thread.Sleep(4000);
        }

    As you can see from the method syntax, it is very similar to a standard Windows message box. If you pass true for the cancelability parameter of StartWaitDialog, you will also see a cancel button, allowing the user to cancel the running task. You can react when the user cancels the task as below:

        bool isCancelled;
        dialog.HasCanceled(out isCancelled);
        if (isCancelled)
        {
            MessageBox.Show("Cancelled");
        }

    Finally, you can close the dialog when the task completes, as below:

        int usercancel;
        dialog.EndWaitDialog(out usercancel);

    To help you quickly experience the above code, I have created a sample, available for download from GitHub. The sample creates a tool window with two buttons to demo the scenarios explained above. The tool window can be accessed by clicking View > Other Windows > ProgressDialogDemo Window.

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively.

    However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package, and we had neither the resources nor inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage.

    You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll quickly spot trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly.

    We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
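    For illustration only - the post doesn’t show its queries, and the table and column names below are placeholders for whatever the Community Server schema actually uses - a custom metric of the kind described is just a query returning a single integer for SQL Monitor to collect and plot:

        -- Hypothetical custom metric: total accepted comments across all articles
        SELECT COUNT(*)
        FROM dbo.Comments
        WHERE IsApproved = 1;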

    Read the article

  • ArcGIS–Getting the Legend Labels out

    - by Avner Kashtan
    Working with ESRI’s ArcGIS package, especially the WPF API, can be confusing. There’s the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features among the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement.

    Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map. ESRI’s own map-editing tools display, under each layer, the names of the fields that make up the symbology; the out-of-the-box Legend control from the WPF API is very pretty but, unfortunately, is missing that option (both are shown as screenshots in the original post). Luckily, the WPF controls have a lot of templating/extensibility points that allow you to specify the layout of each field:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <TextBlock Text="{Binding Layer.ID}"/>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    but that only replicates the built-in behavior. I could now add any additional fields I liked, but unfortunately I couldn’t find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is the part where ESRI’s lack of organization is noticeable, since I can see this data easily when accessing the ArcGIS Server’s web interface, but I had no idea how to find it in the built-in classes. Is it part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service?

    As it turns out, neither. Since these fields are used by the symbol renderer to determine which symbol to draw, they’re actually part of the layer’s Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

        public string LegendFields
        {
            get
            {
                if (this.Renderer is UniqueValueRenderer)
                {
                    return (this.Renderer as UniqueValueRenderer).Field;
                }
                else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
                {
                    var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
                    return string.Join(renderer.FieldDelimiter, renderer.Fields);
                }
                else return null;
            }
        }

    For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the delimiter to separate the fields with. Now it was a simple matter to bring the Legend control in line - assuming that it was bound to a collection of MyFeatureLayer:

        <esri:Legend>
            <esri:Legend.MapLayerTemplate>
                <DataTemplate>
                    <StackPanel>
                        <TextBlock Text="{Binding Layer.ID}"/>
                        <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
                    </StackPanel>
                </DataTemplate>
            </esri:Legend.MapLayerTemplate>
        </esri:Legend>

    and get the look I wanted - the list of fields below the layer name, indented. (Note: the original snippet used TextStyle="Italic", but a WPF TextBlock styles text through its FontStyle property, so that is what is shown here.)

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia.

    Skyvia is a cloud data integration service which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs.

    You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure; it can still match the corresponding records without adding any additional columns or changing the data structure. The only requirement for synchronization is that primary keys must be autogenerated.

    With Skyvia it’s not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables – or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations and builds the corresponding relations between the target data automatically.

    When you often work with cloud CRM data, the native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In this case you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need.

    Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia.

    Currently registration and use of Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment. This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard.

    The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard, see my previous post.

    The BizTalk Benchmark Wizard provides two simple test scenarios, one for messaging and one for orchestrations, which can be used to test your BizTalk implementation.

    Messaging:
    1. Loadgen generates a new XML message and sends it over NetTCP.
    2. A WCF-NetTCP receive location receives the XML document from Loadgen.
    3. The PassThruReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    4. The WCF one-way send port, which is the only subscriber to the message, retrieves the message from the MessageBox.
    5. The PassThruTransmit pipeline provides no additional processing.
    6. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Orchestrations:
    1. Loadgen generates a new XML message and sends it over NetTCP.
    2. A WCF-NetTCP receive location receives the XML document from Loadgen.
    3. The XMLReceive pipeline performs no processing, and the message is published by the EPM to the MessageBox.
    4. The message is delivered to a simple orchestration, which consists of a receive location and a send port.
    5. The WCF one-way send port, which is the only subscriber to the orchestration message, retrieves the message from the MessageBox.
    6. The PassThruTransmit pipeline provides no additional processing.
    7. The message is delivered to the back-end WCF service by the WCF NetTCP adapter.

    Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal, as that server is then both generating and processing the load. In order to separate this load out, you should run the "Indigo" service on a separate server.

    To start the BizTalk Benchmark Wizard, click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On this screen click Next; you will then get a pop-up window. Check the server and database names, and tick the "check prerequisites" check-box before pressing OK. The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances.

    Next you will be presented with the prepare screen. You will need to start the Indigo service before pressing the Test Indigo Service button. If you are running the Indigo service on a separate server, you can enter the server name here. To start the Indigo service, click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service.

    While the test is running you will be presented with two speed-dial-type displays: one for received messages per second and one for processed messages per second. The green dial shows the current rate, and the red dial shows the overall average rate. Optionally, you can view the CPU usage of the various servers involved in processing the tests.

    For my development environment I expected low results, and this is what I got - although looking at the online high scores table and comparing to the quad-core system listed, the results are perhaps not really that bad. At some point I may look at what improvements I can make to this score, but if you are interested in that now, take a look at Benchmark your BizTalk Server (Part 3).

    Read the article

  • ASP.NET MVC 3 Client-Side Validation Summary with jQuery Validation (Unobtrusive JavaScript)

    - by Soe Tun
    When we were working with ASP.NET MVC 2, we needed to write our own JavaScript to get a client-side validation summary with the jQuery Validation plugin. I am one of those unfortunate people still stuck with .NET Framework Runtime 2.0 and .NET Framework 3.5, meaning I am still on ASP.NET MVC 2, so I will keep supporting my original code by answering any questions you may have about it.

    The long-awaited ASP.NET MVC 3 has been released, and it supports client-side validation summaries with jQuery out of the box, with new features like unobtrusive JavaScript.

    1. _Layout.cshtml Template

    Notice that I am using protocol-relative URLs (i.e., '//', not 'http://' or 'https://') to reference script and CSS files, and you should use them like that too! However, please note that IE7 and IE8 will download the CSS files twice, so use this with judgement.

        <!DOCTYPE html>
        <html>
        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Assets/Site.css")" rel="stylesheet" type="text/css" />
            <link href="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.9/themes/redmond/jquery-ui.css" rel="stylesheet" type="text/css" media="screen" />
        </head>
        <body>
            @RenderBody()
            <script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js" type="text/javascript"></script>
            <script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.9/jquery-ui.min.js" type="text/javascript"></script>
            <script src="//ajax.microsoft.com/ajax/jQuery.Validate/1.7/jQuery.Validate.min.js" type="text/javascript"></script>
            <script src="//ajax.aspnetcdn.com/ajax/mvc/3.0/jquery.validate.unobtrusive.min.js" type="text/javascript"></script>
        </body>
        </html>

    2. MVC View Template

    There are three things you *must* do exactly to get the client-side validation summary working.

    (1) You must declare your validation summary **inside** the Html.BeginForm() block, like below.
    (2) You must pass excludePropertyErrors: false to the Html.ValidationSummary() method.

        @using (Html.BeginForm())
        {
            @Html.ValidationSummary(false, "Please fix these errors.");
            <!-- The rest of your View Template -->
        }

    (3) You have to put the following two elements in the <appSettings /> block of your Web.config file:

        <add key="ClientValidationEnabled" value="true"/>
        <add key="UnobtrusiveJavaScriptEnabled" value="true"/>

    That is all you need to do. Simple, right? I will upload a sample project for download soon. Please let me know if you run into any issues.

    P.S.: Without getting into too many technical details, I just wanted to let you know what I went through to get this to work. I had to look into the ASP.NET MVC 3 RTM source code and the jquery.validate.unobtrusive.js source. Initially, I thought I would have to hack jquery.validate.unobtrusive.js or something to get this to work, but after digging into the MVC 3 RTM source, I found out how to do it.

    Read the article

  • Make Text and Images Easier to Read with the Windows 7 Magnifier

    - by DigitalGeekery
    Do you have impaired vision, or find it difficult to read small print on your computer screen? Today, we’ll take a closer look at how to magnify that hard-to-read content with the Magnifier in Windows 7. Magnifier was available in previous versions of Windows, but the Windows 7 version comes with some notable improvements. There are now three screen modes in Magnifier. Full Screen and Lens mode, however, require Windows Aero to be enabled. If your computer doesn’t support Aero, or if you’re not using an Aero theme, Magnifier will only work in Docked mode.

    Using Magnifier in Windows 7: You can find the Magnifier by going to Start > All Programs > Accessories > Ease of Access > Magnifier. Alternately, you can type magnifier into the Search box in the Start Menu and hit Enter. On the Magnifier toolbar, choose your view mode by clicking Views and choosing from the available options. Clicking the plus (+) and minus (-) buttons will zoom in or out. You can change the zoom in/out percentage by adjusting the slider bar. You can also enable color inversion and select tracking options. Click OK when finished to save your settings. After a brief period, the Magnifier toolbar will switch to a magnifying glass icon. Simply click the magnifying glass to display the Magnifier toolbar again.

    Docked Mode: In Docked mode, a portion of the screen is magnified and docked at the top of the screen. The rest of your desktop will remain in its normal state. You can then control which area of the screen is magnified by moving your mouse.

    Full Screen Mode: This magnifies your entire screen and follows your mouse as you move it around. If you lose track of where you are on the screen, use the Ctrl + Alt + Spacebar shortcut to preview where your mouse pointer is.

    Lens Mode: Lens mode is similar to holding a magnifying glass up to your screen. It magnifies the area around the mouse, and the magnified area moves around the screen with your mouse.

    Shortcut Keys:
    Windows key + (+) to zoom in
    Windows key + (-) to zoom out
    Windows key + ESC to exit
    Ctrl + Alt + F – Full screen mode
    Ctrl + Alt + L – Lens mode
    Ctrl + Alt + D – Dock mode
    Ctrl + Alt + R – Resize the lens
    Ctrl + Alt + Spacebar – Preview full screen

    Conclusion: Windows Magnifier is a nice little tool if you have impaired vision or just need to make items on the screen easier to read.

    Read the article

  • New Release of Oracle EPM (Enterprise Performance Management)

    - by Theresa Hickman
    I'm a huge fan of Hyperion products and consider Hyperion to be one of the best acquisitions Oracle has made in terms of applications. So I am really excited to talk about their latest release, Release 11.1.2 of the Oracle EPM System. This is EPM's largest release in two years, and it's jam-packed with new modules and features. In terms of brand new products, there are three:

    1. Public Sector Planning and Budgeting meets the needs of public sector agencies, higher education, governments, etc. that have complex budget requirements. It supports position- or employee-based budgeting and integrates with MS Office and your ERP ledgers to perform commitment control.

    2. Hyperion Financial Close Management is a complete financial close solution that orchestrates the entire close process, from subledgers and general ledger to financial reporting and disclosure submissions. And of course, it is integrated with GL systems and consolidation systems. I saw a demo of this and it looked pretty slick. They have a unified close calendar that looks like a regular calendar and gives each person participating in the close process a task list. It comes with a Gantt chart that shows the relationships and dependencies among closing tasks. There are dashboards that allow you to track close progress and task completion, as well as perform trend analysis and see how much time is being spent on different activities in the close process. This gives you visibility that you never had before to understand where the bottlenecks are and where improvements could be made. I think what I liked best about this product was that it provides a central place for all participants to communicate their progress. When I worked as an accountant, we used ad hoc tools such as spreadsheets, Word documents, emails, and phone calls during the close process. I like the idea of having a central system to track the overall progress as well as automate the entire financial close process. Who knows, maybe accountants won't have to revolve their lives around the month-end close anymore with a tool like this. Those periodic fire drills can become predictable, well-managed processes.

    3. Disclosure Management is an out-of-the-box, pre-packaged XBRL solution to meet statutory reporting requirements. This product is really going to help companies improve the timeliness of producing financial reports. Reports can be authored using MS Word and Excel, and then XBRL instance documents can be produced with embedded XBRL tags. It even supports footnotes and disclosures of non-financial information. With a product like this, companies no longer have to outsource their XBRL filing; they can bring it back in house to save costs and time.

    In terms of other enhancements, there is ERP Integrator, which provides integration and drill-downs from Hyperion products to source systems such as Oracle E-Business Suite, PeopleSoft, and SAP. No other vendor offers this level of integration. There's also a new product that links Oracle Essbase directly to Hyperion Financial Management for internal financial reporting, and there are new integrations between Hyperion Financial Management and Oracle's GRC products. They also improved the usability of Oracle Hyperion Planning, making it much easier for end users to submit plans and budgets via the web or via MS Excel. It is also integrated with intelligent approval workflows that are data-driven, user-configurable, and scenario-specific to efficiently streamline the budgeting process.

    Here's the press release from April 7, 2010. Here's the pre-recorded webcast where you can see the demos; just register and watch the hour-long presentation. And finally, here's the newsletter.

    Read the article

  • MySQL for Excel 1.3.0 Beta has been released

    - by Javier Treviño
    The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.3.0. This is a beta release for 1.3.x.

    MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to work directly with a MySQL database from within Microsoft Excel so you can easily do tasks such as:

    - Importing MySQL data into Excel
    - Exporting Excel data directly into MySQL, to a new or existing table
    - Editing MySQL data directly within Excel

    As this is a beta version, MySQL for Excel can be downloaded only by using the product's standalone installer at this link: http://dev.mysql.com/downloads/windows/excel/

    Your feedback on this beta version is very much appreciated. You can raise bugs on the MySQL bugs page or give us your comments on the MySQL for Excel forum.

    Changes in MySQL for Excel 1.3.0 (2014-06-06, Beta)

    This section documents all changes and bug fixes applied to MySQL for Excel since the release of 1.2.1. Several new features were added; for more information see What Is New In MySQL for Excel (http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel-what-is-new.html).

    Known limitations:

    - Upgrading from MySQL for Excel 1.2.0 and lower is not possible due to a bug fixed in MySQL for Excel 1.2.1. In that scenario, the old version (MySQL for Excel 1.2.0 or lower) must be uninstalled first. Upgrading from version 1.2.1 works correctly.
    - <CTRL> + <A> cannot be used to select all database objects. Either <SHIFT> + <Arrow Key> or <CTRL> + click must be used instead.
    - PivotTables are normally placed to the right (skipping one column) of the imported data; they will not be created if there is another existing Excel object at that position.

    Functionality Added or Changed:

    - Imported data can now be refreshed by using the native Refresh feature. Fields in the imported data sheet are then updated against the live MySQL database using the saved connection ID.
    - Functionality was added to import data directly into PivotTables, which can be created from any Import operation.
    - Multiple objects (tables and views) can now be imported into Excel, where before only one object could be selected. Relational information is also utilized when importing multiple objects.
    - All options now have descriptive tooltips. Hovering over an option/preference displays helpful information about its use.
    - A new Export Data, Advanced Options option was added that shows all available data types in the Data Type combo box, instead of only showing a subset of the most popular data types.
    - The option dialogs now include a Reset to Defaults button that resets the dialog's options to their default values. Each option dialog is set individually.
    - A new Add Summary Fields for Numeric Columns option was added to the Import Data dialog that automatically adds summary fields for numeric data after the last row of the imported data. The specific summary function is selectable from many options, such as "Total" and "Average."
    - A new collation option was added for the schema and table creation wizards. The default schema collation is "Server Default", and the default table collation is "Schema Default". Collation options may be selected from a drop-down list of all available collations.

    Quick links:

    - MySQL for Excel documentation: http://dev.mysql.com/doc/en/mysql-for-excel.html
    - MySQL on Windows blog: http://blogs.oracle.com/MySQLOnWindows
    - MySQL for Excel forum: http://forums.mysql.com/list.php?172
    - MySQL YouTube channel: http://www.youtube.com/user/MySQLChannel

    Enjoy, and thanks for the support!

    Read the article

  • Browser History ASP.Net AJAX: Microsoft.Web.Preview

    - by Narendra Tiwari
    I remember that in 2006 we were working on a portal for our client, the Venetian in Las Vegas, and the portal was full of AJAX features. One of my friends faced the challenge of retaining browser history across all the AJAX operations. In terms of user experience, it was an important aspect that could not be ignored in that scenario. At the time we made some workarounds to achieve it, but they may not have been the perfect solution.

    Now, with Microsoft AJAX, a lot of such features can be achieved with optimum efficiency. Microsoft AJAX has grown its features over the past few years. Microsoft.Web.Preview.dll is an add-on used in conjunction with ASP.NET AJAX, and it contains a control named "History" for exactly this purpose.

    Source code: http://download.microsoft.com/download/8/3/1/831ffcd7-c571-4075-b8fa-6ff678794f60/CS-ASP-ASPBrowserHistoryinAJAX_cs.zip

    Below is a small sample to demonstrate the control.

    1/ Get the dll from the bin of the source code above and add a reference to it in your web application.

    2/ Right-click on the Toolbox panel, select Choose Items, and browse to the assembly. Now you will be able to see the History control.

    3/ Add the section group below in web.config under <configSections>:

        <sectionGroup name="microsoft.web.preview" type="Microsoft.Web.Preview.Configuration.PreviewSectionGroup, Microsoft.Web.Preview">
            <section name="search" type="Microsoft.Web.Preview.Configuration.SearchSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
            <section name="searchSiteMap" type="Microsoft.Web.Preview.Configuration.SearchSiteMapSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
            <section name="diagnostics" type="Microsoft.Web.Preview.Configuration.DiagnosticsSection, Microsoft.Web.Preview" requirePermission="false" allowDefinition="MachineToApplication"/>
        </sectionGroup>

    4/ Now create a simple web page with a textbox (txt1) and a button (btn1) in an UpdatePanel, together with a History control (History1). We will fill in the textbox and post the form by clicking the button a few times, then verify that the browser history is retained. Remember: the button and textbox must be inside the UpdatePanel, and the History control outside the UpdatePanel.

        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="History.aspx.cs" Inherits="History" %>
        <%@ Register Assembly="Microsoft.Web.Preview" Namespace="Microsoft.Web.Preview.UI.Controls" TagPrefix="cc1" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <head runat="server">
            <title>Untitled Page</title>
        </head>
        <body>
            <form id="form1" runat="server">
                <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true"></asp:ScriptManager>
                <div>
                    <cc1:History ID="History1" runat="server" OnNavigate="History1_Navigate">
                    </cc1:History>
                    <asp:UpdatePanel ID="up1" runat="server">
                        <ContentTemplate>
                            <asp:TextBox ID="txt1" runat="server"></asp:TextBox><br />
                            <asp:Button ID="btn1" runat="server" Text="Test" OnClick="btn1_Click" />
                        </ContentTemplate>
                        <Triggers>
                            <asp:AsyncPostBackTrigger ControlID="History1" />
                        </Triggers>
                    </asp:UpdatePanel>
                </div>
            </form>
        </body>
        </html>

    5/ The code below adds the textbox value to the history every time we post back via the btn1 click:

        protected void btn1_Click(object sender, EventArgs e)
        {
            History1.AddHistoryPoint("txtState", txt1.Text);
        }

    6/ And finally, the Navigate event of the History control:

        protected void History1_Navigate(object sender, Microsoft.Web.Preview.UI.Controls.HistoryEventArgs args)
        {
            string strState = string.Empty;
            if (args.State.ContainsKey("txtState"))
            {
                strState = (string)args.State["txtState"];
            }
            txt1.Text = strState;
        }

    Now all is set to go :)

    Reference: http://www.dotnetglobe.com/2008/08/using-asp.html

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well with large-block workloads? In this blog entry I will give you a little bit of information that will probably help you understand why it's so slow and why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k, and up. Of course, since I work for Oracle at this time, I will also show you why the ZS3 storage boxes are excellent choices for these types of workloads.

    Netapp's Fundamental Problem

    The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system. Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference: Netapp TR-3001 Whitepaper.

    Netapp has demonstrated this lack of large-block performance in at least two different ways:

    - They have NEVER posted an SPC-2 benchmark, yet they have posted SPC-1 and SPECsfs results, both recently.
    - In 2011 they purchased Engenio to try and fill this gap in their portfolio.

    Block Size Matters

    So why does block size matter anyway? Many applications use large block chunks of data, especially in the Big Data movement. Some examples are SAS Business Analytics, Microsoft SQL, and Hadoop HDFS, which is even 64MB!

    Now let me boil this down for you. If an application such as MS SQL is writing data in a 64k chunk, then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS. When the application later goes to read that 64k chunk, the Netapp will again have to do 16 different disk IOPS. In comparison, the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB. So if you put the same MS SQL database on a ZS3, you can set the specific LUNs for this database to 64k, and an application read/write then requires only a single disk IO. That is 16x faster! But, back to the problem with your Netapp: you will VERY quickly run out of disk IO and hit a wall.

    Now, all arrays will have some fancy prefetch algorithm and some nice cache, maybe even flash-based cache such as a PAM card in your Netapp, but with large-block workloads you will usually blow through the cache and still need significant disk IO. Also, because these datasets are usually very large and usually not dedupable, they are usually not good candidates for an all-flash system. You can do some simple math in Excel and very quickly see why it matters. Here are a couple of READ examples using SAS and MS SQL; assume these are the READ IOPS the application needs even after all the fancy cache and algorithms. One example uses 128k blocks (notice the number of drives on the Netapp!), the other 64k blocks.

    You can easily see that the Oracle ZS3 can do dramatically more work with dramatically fewer drives. This doesn't even take into account that the ONTAP system will likely run out of CPU well before you get to these drive numbers, so you would be buying many more controllers. So with all that said, let's look at the ZS3 and why you should consider it for any workload you're running on Netapp today.

    ZS3 World Record Price/Performance in the SPC-2 Benchmark

    - ZS3-2 is #1 in price/performance: $12.08
    - ZS3-2 is #3 in overall performance: 16,212 MBPS

    Note: The number one overall spot in the world is held by an AFA at 33,477 MBPS, but at a price/performance of $29.79. A customer could purchase two ZS3-2 systems from the benchmark with relatively the same performance and walk away with $600,000 in their pocket.
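    To make that "simple math" concrete, here is a small back-of-the-envelope calculation; this is my own sketch, not from the original post, and the 10,000 application IOPS and 200 IOPS per disk are illustrative assumptions:

        using System;

        class BlockSizeMath
        {
            static void Main()
            {
                int appBlockKb = 64;      // application I/O size (e.g., MS SQL)
                int waflBlockKb = 4;      // fixed on-disk block size of WAFL
                int appIops = 10000;      // assumed application READ IOPS after cache
                int iopsPerDrive = 200;   // assumed IOPS a single 10k-rpm disk sustains

                // Fixed 4k backend: every 64k application I/O becomes 16 disk I/Os.
                int amplification = appBlockKb / waflBlockKb;       // 16
                int backend4k = appIops * amplification;            // 160,000 disk IOPS
                int drives4k = backend4k / iopsPerDrive;            // ~800 drives

                // Variable record size matched to the application (64k):
                // one disk I/O per application I/O.
                int backendMatched = appIops;                       // 10,000 disk IOPS
                int drivesMatched = backendMatched / iopsPerDrive;  // ~50 drives

                Console.WriteLine("4k backend:      {0} disk IOPS, ~{1} drives", backend4k, drives4k);
                Console.WriteLine("64k record size: {0} disk IOPS, ~{1} drives", backendMatched, drivesMatched);
            }
        }

    Under those assumptions, the fixed 4k backend needs roughly sixteen times the spindles for the same application workload, which is exactly the wall described above.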

    Read the article

  • Using 3G/UMTS in Mauritius

    After some conversations, threads in online forums, and mailing lists, I thought about writing this article on how to set up, configure, and use 3G/UMTS connections on Linux here in Mauritius. Personally, I can only share my experience with Emtel Ltd., but I will try to give some clues about how to configure Orange as well.

    Emtel 3G/UMTS surf stick

    Emtel provides different surf sticks from Huawei. Back in 2007, I started with an E220 that wouldn't run on Windows Vista either. Nowadays, you just plug in the surf stick (e.g., an E169) and usually the Network Manager will detect the new broadband modem. Nothing to worry about. The Linux Network Manager even provides a connection profile for Emtel here in Mauritius, and establishing the Internet connection is done in less than 2 minutes... even quicker.

    Using wvdial

    Old-fashioned Linux users might not take Network Manager into consideration but feel comfortable with wvdial. Although wvdial is primarily used with serial-port-attached modems, it can operate on USB ports as well. The following is my configuration from /etc/wvdial.conf:

        [Dialer Defaults]
        Phone = *99#
        Username = emtel
        Password = emtel
        New PPPD = yes
        Stupid Mode = 1
        Dial Command = ATDT

        [Dialer emtel]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","web"
        ISDN = 0
        Modem Type = Analog Modem

    The values of user name and password are optional and can be configured as you like. In case your SIM card is protected by a PIN - which is highly advised - you might add another dialer section to your configuration file like so:

        [Dialer pin]
        Modem = /dev/ttyUSB0
        Init1 = AT+CPIN=0000

    This way you can "daisy-chain" your commands to establish your Internet connection like so:

        wvdial pin emtel

    And it works auto-magically. Depending on your group assignments (dialout), you might have to sudo the wvdial statement like so:

        sudo wvdial pin emtel

    Orange parameters

    As far as I could figure out without really testing it myself, it is also necessary to set the Access Point (AP) manually with Orange. Although it is pretty obvious, a lot of people seem to struggle. The AP value is "orange".

        [Dialer orange]
        Modem = /dev/ttyUSB0
        Baud = 3774000
        Init2 = ATZ
        Init3 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
        Init4 = AT+cgdcont=1,"ip","orange"
        ISDN = 0
        Modem Type = Analog Modem

    And you are done.

    Official Linux support from providers

    It's simple: forget it! The people at the Emtel call center are completely focused on the hardware and the Mobile Connect software application provided by Huawei, and they are totally lost if you confront them with other setups. For example, my wife's netbook has an integrated 3G/UMTS modem from Ericsson. There is therefore no need to use the Huawei surf stick at all, and of course we use the existing software, named Wireless Manager, instead. Now, imagine mentioning at the help desk: "Ehm, sorry, but what's Mobile Connect?" And Linux, after all, might give the call operator sleepless nights... Who knows?

    Anyway, I hope that my article and configuration can give you a helping hand and that you will be able to connect your Linux box with 3G/UMTS surf sticks here in Mauritius.

    Read the article

  • LibGDX Box2D Body and Sprite AND DebugRenderer out of sync

    - by Free Lancer
    I am having a couple of issues with Box2D bodies. I have a GameObject holding a Sprite and a Body. I use a ShapeRenderer to draw an outline of the Body's and Sprite's bounding boxes, and I also added a Box2DDebugRenderer to make sure everything's lining up properly. My problem is that the Sprite and Body at first overlap perfectly, but as I turn, the Body moves a bit off the Sprite, then comes back when the Car is facing either North or South. Here's an image of what I mean (not sure what that line is; it's the first time it has shown up): BLUE is the Body, RED is the Sprite, PURPLE is the Box2DDebugRenderer. Also, you probably noticed a purple square in the top right corner; that's the Car drawn by the Box2D Debug Renderer. I thought it might be the camera, but I've been playing with the cameras for hours and nothing seems to work. Everything I try gives me weird results. Here's my code:

    Screen:

        public void show() {
            // --------------------- SETUP ALL THE CAMERA STUFF ------------------------------ //
            battleStage = new Stage( 720, 480, false );
            // Setup the camera. In Box2D we operate on a meter scale, pixels won't do it. So we use
            // an Orthographic camera with a Viewport of 24 meters in width and 16 meters in height.
            battleStage.setCamera( new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ) );
            battleStage.getCamera().position.set( CAM_METER_WIDTH / 2, CAM_METER_HEIGHT / 2, 0 );
            // The Box2D Debug Renderer will handle rendering all physics objects for debugging
            debugger = new Box2DDebugRenderer( true, true, true, true );
            //debugCam = new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT );
        }

        public void render(float delta) {
            // Update the Physics World, use 1/45 for something around 45 Frames/Second for mobile devices
            physicsWorld.step( 1/45.0f, 8, 3 ); // 1/45 for devices
            // Set the Camera matrices and clear the screen
            Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
            battleStage.getCamera().update();
            // Draw game objects here
            battleStage.act(delta);
            battleStage.draw();
            // Again update the Camera matrices and call the debug renderer
            debugCam.update();
            debugger.render( physicsWorld, debugCam.combined );
            // Vehicle handles its own interaction with the HUD
            // update all Actors movements in the game Stage
            hudStage.act( delta );
            // Draw each Actor onto the Scene at their new positions
            hudStage.draw();
        }

    Car: (extends Actor)

        public Car( Texture texture, float posX, float posY, World world ) {
            super( "Car" );
            mSprite = new Sprite( texture );
            mSprite.setSize( WIDTH * Consts.PIXEL_METER_RATIO, HEIGHT * Consts.PIXEL_METER_RATIO );
            mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2 ); // set the origin to be at the center of the body
            mSprite.setPosition( posX * Consts.PIXEL_METER_RATIO, posY * Consts.PIXEL_METER_RATIO ); // place the car in the center of the game map
            FixtureDef carFixtureDef = new FixtureDef();
            mBody = Physics.createBoxBody( BodyType.DynamicBody, carFixtureDef, mSprite );
        }

        public void draw() {
            mSprite.setPosition( mBody.getPosition().x * Consts.PIXEL_METER_RATIO,
                                 mBody.getPosition().y * Consts.PIXEL_METER_RATIO );
            mSprite.setRotation( MathUtils.radiansToDegrees * mBody.getAngle() );
            // draw the sprite
            mSprite.draw( batch );
        }

    Physics: (Create the Body)

        public static Body createBoxBody( final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) {
            float pRotation = 0;
            float pWidth = pSprite.getWidth();
            float pHeight = pSprite.getHeight();

            final BodyDef boxBodyDef = new BodyDef();
            boxBodyDef.type = pBodyType;
            boxBodyDef.position.x = pSprite.getX() / Consts.PIXEL_METER_RATIO;
            boxBodyDef.position.y = pSprite.getY() / Consts.PIXEL_METER_RATIO;

            // Temporary Box shape of the Body
            final PolygonShape boxPoly = new PolygonShape();
            final float halfWidth = pWidth * 0.5f / Consts.PIXEL_METER_RATIO;
            final float halfHeight = pHeight * 0.5f / Consts.PIXEL_METER_RATIO;
            boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite
            pFixtureDef.shape = boxPoly;

            final Body boxBody = BattleScreen.getPhysicsWorld().createBody(boxBodyDef);
            boxBody.createFixture(pFixtureDef);
            return boxBody;
        }

    Sorry for all the code and the long description, but it's hard to pin down what exactly might be causing the problem.

    Read the article

  • Evaluating Solutions to Manage Product Compliance? Don’t Wait Much Longer

    - by Evelyn Neumayr
    By Kerrie Foy, Director PLM Product Marketing, Oracle

    Depending on severity, product compliance issues can cause various problems, from runaway budgets to business closures. But effective policies and safeguards can create a strong foundation for innovation, productivity, market penetration, and competitive advantage. If you've been putting off a systematic approach to product compliance, it is time to reconsider that decision. Why now?

    No matter the industry, companies face a litany of worldwide and regional regulations that require proof of product compliance and environmental friendliness for market access. For example, Restriction of Hazardous Substances (RoHS), a regulation that restricts the use of six dangerous materials in the manufacture of electronic and electrical equipment, was originally adopted by the European Union in 2003 for implementation in 2006, and it has evolved over time through various regional versions for North America, China, Japan, Korea, Norway, and Turkey. In addition, the RoHS directive allowed for material exemptions used in medical devices, but that exemption ends in 2014. Additional regulations worth watching are the Battery Directive, Waste Electrical and Electronic Equipment (WEEE), and Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) directives. More regulations are expected from organizations such as the Food and Drug Administration in the US and similar organizations elsewhere.

    Meeting compliance requirements while also successfully investing in eco-friendly designs can be a major challenge. It may involve transforming business models, go-to-market strategies, supply networks, quality assurance policies, and compliance processes. Without a single source of truth for product data and without proper processes in place, ensuring product compliance burgeons into a crushing task that is cost-prohibitive and overwhelming. However, the risk to consumer goodwill and satisfaction, revenue, business continuity, and market potential is too great not to solve the compliance challenge. Companies are beginning to adapt and thrive by implementing systematic approaches to product compliance that are more than functional bandages; they are revenue-generating engines.

    Consider working with Oracle to help you address your compliance needs. Many of the world's most innovative leaders and pioneers are leveraging Oracle's Agile Product Lifecycle Management (PLM) portfolio of enterprise applications to manage the product value chain, centralize product data, automate processes, and launch more eco-friendly products to market faster. In particular, the Agile Product Governance & Compliance (PG&C) solution provides out-of-the-box functionality to integrate actionable regulatory information into the enterprise product record from the ideation to the disposal/recycling phase. Agile PG&C is a comprehensive solution that makes product compliance per corporate initiatives and regulations more reliable and efficient. Throughout product lifecycles, use the solution to support full material disclosures, gain rapid visibility into non-compliance issues, efficiently manage declarations with your suppliers, feed compliance data into a corrective action if a product must be changed, and swiftly satisfy audits by showing all due diligence tracked in one solution.

    Given the compounding regulation and consumer focus on urgent environmental issues, now is the time to act. Implementing an enterprise-wide systematic approach to product compliance is a competitive investment. From the start, Agile PG&C enables companies to confidently design for compliance and sustainability, reduce the cost of compliance, minimize the risk of business interruption, deliver responsible products, and inspire new innovation. Don't wait any longer! To find out more about Agile Product Governance & Compliance, download the data sheet, contact your sales representative, or call Oracle at 1-800-633-0738.

    Read the article

  • Silverlight Cream for February 13, 2011 -- #1046

    - by Dave Campbell
    In this Issue: Loek van den Ouweland, Colin Eberhardt, Rudi Grobler, Joost van Schaik, Mike Taulty(-2-, -3-), Deborah Kurata, David Kelley, Peter Foot, Samuel Jack(-2-), and WindowsPhoneGeek(-2-).

    Above the Fold:
    Silverlight: "Silverlight Simple MVVM Commanding" Deborah Kurata
    WP7: "WP7 CustomInputPrompt control with Cancel button" WindowsPhoneGeek
    Expression Blend: "Silverlight Templated Image Button with two images" Loek van den Ouweland

    Shoutouts:
    Dave Campbell posted a write-up about the project he's on and the use of Sterling: Sterling Object-Oriented Database for ISO 1.0 Released!... Also see Jeremy Likness' post on the 1.0 release: Sterling Object-Oriented Database 1.0 RTM. Not necessarily Silverlight, but darn cool, a great control by Sasha Barber: WPF : A Weird 3d based control. snoutholder announced new content: Windows Phone 7 QuickStarts Live!

    From SilverlightCream.com:

    Silverlight Templated Image Button with two images
    Loek van den Ouweland has a video tutorial up for creating an ImageButton with a hover state... Expression Blend coolness; check out the external links he has to their training site.

    Windows Phone 7 Performance Measurements – Emulator vs. Hardware
    Colin Eberhardt's latest is a popular post comparing performance metrics between the WP7 emulator and a real device. Mileage may vary, but I'm pretty sure the overall results are conclusive, and they should help the way you view your app as you're building it in the emulator.

    WP7: WebClient vs HttpWebRequest
    Rudi Grobler's latest is a discussion of WebClient and HttpWebRequest; he gives coding examples of each plus discussion of why you may choose one over the other... and pay attention to his comment about mobile providers.

    A Blendable Windows Phone 7 / Silverlight clipping behavior
    Joost van Schaik posted this WP7/Silverlight clipping behavior he developed because all the other solutions were not Blendable. Another really useful piece of code from Joost!

    Blend Bits 22–Being Stylish
    Mike Taulty has 3 more episodes in his Blend Bits series... the first is on Styles... explicit, implicit, inheriting... you name it, he's covering it!

    Blend Bits 23–Templating Part 1
    Mike Taulty then has the beginning of a series within his Blend Bits series, on Templating. This is something you just have to either bite the bullet and do with Blend, or consume someone else's work. Mike shows us how to do it ourselves by tweaking the visual aspects of a checkbox.

    Blend Bits 24–Templating Part 2
    In part 2 of the Templating series, Mike Taulty digs deeper into Blend and cracks open the Listbox control to take a bunch of the inner elements out for a spin... fun stuff and a great tutorial, Mike!

    Silverlight Simple MVVM Commanding
    Deborah Kurata has another great MVVM post up... if you don't have your head wrapped around commanding yet, this is a good place to start that process... VB and C# as always.

    App Development for Windows Phone 7 101
    David Kelley goes through the basics of producing a WP7 app from both the Silverlight and XNA sides... good info and good external links to get you going.

    Copyable TextBlock for Windows Phone
    Peter Foot takes a look at the Copy/Paste functionality in WP7 and how to apply it to a TextBlock... which is NOT an out-of-the-box solution.

    How to deploy to, and debug, multiple instances of the Windows Phone 7 emulator
    Samuel Jack has a couple of posts up this week... the first is this clever one on running multiple copies of the emulator at once... too cool for debugging a multi-player game!

    Multi-player enabling my Windows Phone 7 game: Day 3 – The Server Side
    Samuel Jack's latest is a detailed look at day 3 of his adventure taking his multi-player game to WP7... lots of information and external links... what do you say, give him another day? :)

    WP7 CustomInputPrompt control with Cancel button
    WindowsPhoneGeek has a couple more posts up... the first is this "CustomInputPrompt" control based off the InputPrompt from Coding4Fun.

    Implementing Windows Phone 7 DataTemplateSelector and CustomDataTemplateSelector
    In his latest post, WindowsPhoneGeek writes a DataTemplateSelector to allow different data templates for different list elements based on the type of the element.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
    Recently someone asked me for the steps to configure multiple instances of the MySQL database on an operating platform. Because of my familiarity with the Solaris OE, I prepared some notes on configuring multiple instances of the MySQL database on Solaris 11. Maybe they're useful for some.

    If you want to run the Solaris Operating System (or any other OS of your choice) as a virtualized instance on your desktop, consider using VirtualBox. To download the Solaris Operating System, click here. Once you have your Solaris Operating System (version 11) up and running and have Internet connectivity to gain access to the Image Packaging System (IPS), please follow the steps below to install MySQL and configure multiple instances.

    1. Install the MySQL database in Solaris 11:

        $ sudo pkg install mysql-51

    2. Verify that MySQL is installed:

        $ svcs -a | grep mysql

    Note: The service FMRI will look similar to this: svc:/application/database/mysql:version_51

    3. Prepare the data file system for MySQL instance 1:

        zfs create rpool/mysql
        zfs create rpool/mysql/data
        zfs set mountpoint=/mysql/data rpool/mysql/data

    4. Prepare the data file system for MySQL instance 2:

        zfs create rpool/mysql/data2
        zfs set mountpoint=/mysql/data2 rpool/mysql/data2

    5. Change the mysql/data property of the MySQL service (SMF) to point to /mysql/data:

        $ svcprop mysql:version_51 | grep mysql/data
        $ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data

    6. Create a new instance of MySQL 5.1:

    (a) Copy the manifest of the default instance to a temporary directory:

        $ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml

    (b) Make the appropriate modifications to the XML file:

        $ sudo vi /var/tmp/mysql_51_2.xml

    - Change the "instance name" section to a new value, "version_51_2"
    - Change the value of the property named "data" to point to the ZFS file system "/mysql/data2"

    7. Import the manifest into the SMF repository:

        $ sudo svccfg import /var/tmp/mysql_51_2.xml

    8. Before starting the services, copy the file /etc/mysql/my.cnf to the data directories /mysql/data and /mysql/data2:

        $ sudo cp /etc/mysql/my.cnf /mysql/data/
        $ sudo cp /etc/mysql/my.cnf /mysql/data2/

    9. Make modifications to the my.cnf in each of the data directories as required:

        $ sudo vi /mysql/data/my.cnf

        Under the [client] section:
        port=3306
        socket=/tmp/mysql.sock
        ...

        Under the [mysqld] section:
        port=3306
        socket=/tmp/mysql.sock
        datadir=/mysql/data
        ...
        server-id=1

        $ sudo vi /mysql/data2/my.cnf

        Under the [client] section:
        port=3307
        socket=/tmp/mysql2.sock
        ...

        Under the [mysqld] section:
        port=3307
        socket=/tmp/mysql2.sock
        datadir=/mysql/data2
        ...
        server-id=2

    10. Make the appropriate modifications to the MySQL startup script (managed by SMF) so that each instance points to its own my.cnf:

        $ sudo vi /lib/svc/method/mysql_51

    Note: Search for all occurrences of the mysqld_safe command and modify them to include the --defaults-file option. An example entry would look as follows:

        ${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE}

    11. Start the services:

        $ sudo svcadm enable mysql:version_51_2
        $ sudo svcadm enable mysql:version_51

    12. Verify that the two services are running:

        $ svcs mysql

    13. Verify the processes:

        $ ps -ef | grep mysqld

    14. Connect to each mysqld instance and verify:

        $ mysql --defaults-file=/mysql/data/my.cnf -u root -p
        $ mysql --defaults-file=/mysql/data2/my.cnf -u root -p

    Some references for Solaris 11 newbies:

    - Taking your first steps with Solaris 11
    - Introducing the basics of Image Packaging System
    - Service Management Facility How To Guide

    For a detailed list of official educational modules available on Solaris 11, please visit here. For MySQL courses from Oracle University, access this page.

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >