Search Results

  • Getting JAX-WS client work on Weblogic 9.2 with ant

    - by michuk
    I've recently had lots of issues trying to deploy a JAX-WS web service client on Weblogic 9.2. It turns out there is no straightforward guide on how to achieve this, so I decided to put together this short wiki entry hoping it might be useful for others. Firstly, Weblogic 9.2 does not support web services using JAX-WS in general. It comes with old versions of XML-related java libraries that are incompatible with the latest JAX-WS (similar issues occur with Axis2, only Axis1 seems to be working flawlessly with Weblogic 9.x but that's a very old and unsupported library). So, in order to get it working, some hacking is required. This is how I did it (note that we're using ant in our legacy corporate project, you probably should be using maven which should eliminate 50% of those steps below): Download the most recent JAX-WS distribution from https://jax-ws.dev.java.net/ (The exact version I got was JAXWS2.2-20091203.zip) Place the JAX-WS jars with the dependencies in a separate folder like lib/webservices. Create a patternset in ant to reference those jars: Include the patternset in your WAR-related goal. This could look something like: (note the flatten="true" parameter - it's important as Weblogic 9.x is by default not smart enough to access jars located in a different location than WEB-INF/lib inside your WAR file) In case of clashes, Weblogic uses its own jars by default. We want it to use the JAX-WS jars from our application instead. This is achieved by preparing a weblogic-application.xml file and placing it in the META-INF folder of the deployed EAR file. It should look like this: javax.jws. javax.xml.bind. javax.xml.crypto. javax.xml.registry. javax.xml.rpc. javax.xml.soap. javax.xml.stream. javax.xml.ws. com.sun.xml.api.streaming.* Remember to place that weblogic-application.xml file in your EAR! The ant goal for that may look similar to: <jar destfile="${warfile}" basedir="${wardir}"/> <ear destfile="${earfile}" appxml="resources/${app.name}/application.xml"> <fileset dir="${dist}" includes="${app.name}.war"/> <metainf dir="resources/META-INF"/> </ear> Also you need to tell Weblogic to prefer your WEB-INF classes to those in the distribution. You do that by placing the following lines in your WEB-INF/weblogic.xml file: true And that's it for the Weblogic-related configuration. Now just set up your JAX-WS goal. The one below is going to simply generate the web service stubs and classes based on a locally deployed WSDL file and place them in a folder in your app: Remember about the keep="true" parameter. Without it, wsimport generates the classes and... deletes them, believe it or not! For mocking a web service I suggest using SOAPUI, an open source project. Very easy to deploy, crucial for web services integration testing. We're almost there. The final thing is to write a Java class for testing the web service, try to run it as a standalone app first (or as part of your unit tests) and then try to run the same code from within Weblogic. It should work. It worked for me. After some 3 days of frustration. And yes, I know I should've put 9 and 10 under a single bullet-point, but the title "10 steps to deploy a JAX-WS web service under Weblogic 9.2 using ant" sounds just so much better. Please, edit this post and improve it if you find something missing!
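
    The XML snippets referred to above ("It should look like this:" and the patternset/goal examples) appear to have been stripped out of this post. Purely as a rough reconstruction, with the package list taken from the text above and element names that should be verified against the WebLogic 9.2 descriptor schemas, the two deployment descriptors being described might look something like this:

    ```xml
    <!-- META-INF/weblogic-application.xml in the EAR: prefer the application's
         own JAX-WS packages over the server's bundled copies -->
    <weblogic-application xmlns="http://www.bea.com/ns/weblogic/90">
      <prefer-application-packages>
        <package-name>javax.jws.*</package-name>
        <package-name>javax.xml.bind.*</package-name>
        <package-name>javax.xml.crypto.*</package-name>
        <package-name>javax.xml.registry.*</package-name>
        <package-name>javax.xml.rpc.*</package-name>
        <package-name>javax.xml.soap.*</package-name>
        <package-name>javax.xml.stream.*</package-name>
        <package-name>javax.xml.ws.*</package-name>
        <package-name>com.sun.xml.api.streaming.*</package-name>
      </prefer-application-packages>
    </weblogic-application>

    <!-- WEB-INF/weblogic.xml in the WAR: prefer WEB-INF/lib classes over the
         server's versions (the stray "true" in the text above) -->
    <weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/90">
      <container-descriptor>
        <prefer-web-inf-classes>true</prefer-web-inf-classes>
      </container-descriptor>
    </weblogic-web-app>
    ```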

    Read the article

  • Creating a MiniDump of a running process

    - by Lodle
    Im trying to make a tool for my end users that can create a MiniDump of my application if it hangs (i.e. external to the app). Im using the same code as the internal MiniDumper but with the handle and processid of the app but i keep getting error code 0xD0000024 when calling MiniDumpWriteDump. Any ideas? void produceDump( const char* exe ) { DWORD processId = 0; HANDLE process = findProcess(exe, processId); if (!process || processId == 0) { printf("Unable to find exe %s to produce dump.\n", exe); return; } LONG retval = EXCEPTION_CONTINUE_SEARCH; HWND hParent = NULL; // find a better value for your app // firstly see if dbghelp.dll is around and has the function we need // look next to the EXE first, as the one in System32 might be old // (e.g. Windows 2000) HMODULE hDll = NULL; char szDbgHelpPath[_MAX_PATH]; if (GetModuleFileName( NULL, szDbgHelpPath, _MAX_PATH )) { char *pSlash = _tcsrchr( szDbgHelpPath, '\\' ); if (pSlash) { _tcscpy( pSlash+1, "DBGHELP.DLL" ); hDll = ::LoadLibrary( szDbgHelpPath ); } } if (hDll==NULL) { // load any version we can hDll = ::LoadLibrary( "DBGHELP.DLL" ); } LPCTSTR szResult = NULL; int err = 0; if (hDll) { MINIDUMPWRITEDUMP pDump = (MINIDUMPWRITEDUMP)::GetProcAddress( hDll, "MiniDumpWriteDump" ); if (pDump) { char szDumpPath[_MAX_PATH]; char szScratch [_MAX_PATH]; time_t rawtime; struct tm * timeinfo; time ( &rawtime ); timeinfo = localtime ( &rawtime ); char comAppPath[MAX_PATH]; SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA , NULL, SHGFP_TYPE_CURRENT, comAppPath ); //COMMONAPP_PATH _snprintf(szDumpPath, _MAX_PATH, "%s\\DN", comAppPath); CreateDirectory(szDumpPath, NULL); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D", comAppPath); CreateDirectory(szDumpPath, NULL); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D\\dumps", comAppPath); CreateDirectory(szDumpPath, NULL); char fileName[_MAX_PATH]; _snprintf(fileName, _MAX_PATH, "%s_Dump_%04d%02d%02d_%02d%02d%02d.dmp", exe, timeinfo->tm_year+1900, timeinfo->tm_mon, timeinfo->tm_mday, timeinfo->tm_hour, timeinfo->tm_min, timeinfo->tm_sec ); _snprintf(szDumpPath, _MAX_PATH, "%s\\DN\\D\\dumps\\%s", comAppPath, fileName); // create the file HANDLE hFile = ::CreateFile( szDumpPath, GENERIC_WRITE, FILE_SHARE_WRITE, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL ); if (hFile!=INVALID_HANDLE_VALUE) { MINIDUMP_CALLBACK_INFORMATION mci; mci.CallbackRoutine = (MINIDUMP_CALLBACK_ROUTINE)MyMiniDumpCallback; mci.CallbackParam = 0; MINIDUMP_TYPE mdt = (MINIDUMP_TYPE)(MiniDumpWithPrivateReadWriteMemory | MiniDumpWithDataSegs | MiniDumpWithHandleData | //MiniDumpWithFullMemoryInfo | //MiniDumpWithThreadInfo | MiniDumpWithProcessThreadData | MiniDumpWithUnloadedModules ); // write the dump BOOL bOK = pDump( process, processId, hFile, mdt, NULL, NULL, &mci ); DWORD lastErr = GetLastError(); if (bOK) { printf("Crash dump saved to: %s\n", szDumpPath); return; } else { _snprintf( szScratch, _MAX_PATH, "Failed to save dump file to '%s' (error %u)", szDumpPath, lastErr); szResult = szScratch; err = ERR_CANTSAVEFILE; } ::CloseHandle(hFile); } else { _snprintf( szScratch, _MAX_PATH, "Failed to create dump file '%s' (error %u)", szDumpPath, GetLastError()); szResult = szScratch; err = ERR_CANTMAKEFILE; } } else { szResult = "DBGHELP.DLL too old"; err = ERR_DBGHELP_TOOLD; } } else { szResult = "DBGHELP.DLL not found"; err = ERR_DBGHELP_NOTFOUND; } printf("Could not produce a crash dump of %s.\n\n[error: %u %s].\n", exe, err, szResult); return; } this code works 100% when its internal to the process (i.e. with SetUnhandledExceptionFilter)
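
    The findProcess() helper used above isn't shown in the post, and MiniDumpWriteDump is sensitive to the access rights the process handle was opened with. Purely as an illustrative sketch (not a confirmed fix for the 0xD0000024 error), a helper along these lines would open the target with the rights the dump call generally needs; it assumes an ANSI (non-Unicode) build like the rest of the code:

    ```cpp
    // Hypothetical sketch of a findProcess() helper: walk the process list and open
    // the first match with PROCESS_QUERY_INFORMATION | PROCESS_VM_READ (plus
    // PROCESS_DUP_HANDLE, since the dump type above includes handle data).
    #include <windows.h>
    #include <tlhelp32.h>
    #include <string.h>

    HANDLE findProcess(const char* exe, DWORD& processId)
    {
        processId = 0;
        HANDLE snap = ::CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
        if (snap == INVALID_HANDLE_VALUE)
            return NULL;

        PROCESSENTRY32 pe = { sizeof(PROCESSENTRY32) };
        HANDLE process = NULL;
        if (::Process32First(snap, &pe))
        {
            do
            {
                if (_stricmp(pe.szExeFile, exe) == 0)
                {
                    processId = pe.th32ProcessID;
                    process = ::OpenProcess(
                        PROCESS_QUERY_INFORMATION | PROCESS_VM_READ | PROCESS_DUP_HANDLE,
                        FALSE, processId);
                    break;
                }
            } while (::Process32Next(snap, &pe));
        }
        ::CloseHandle(snap);
        return process;
    }
    ```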

    Read the article

  • ASP.NET DetailsView update method not getting new values

    - by Ali
    Hi all, I am binding a detailsview with objectdatasource which gets the select parameter from the querystring. The detailsview shows the desired record, but when I try to update it, my update method gets the old values for the record (and hence no update). here is my detailsview code: <asp:DetailsView ID="dvUsers" runat="server" Height="50px" Width="125px" AutoGenerateRows="False" DataSourceID="odsUserDetails" onitemupdating="dvUsers_ItemUpdating"> <Fields> <asp:CommandField ShowEditButton="True" /> <asp:BoundField DataField="Username" HeaderText="Username" SortExpression="Username" ReadOnly="true" /> <asp:BoundField DataField="FirstName" HeaderText="First Name" SortExpression="FirstName" /> <asp:BoundField DataField="LastName" HeaderText="Last Name" SortExpression="LastName" /> <asp:BoundField DataField="Email" runat="server" HeaderText="Email" SortExpression="Email" /> <asp:BoundField DataField="IsActive" HeaderText="Is Active" SortExpression="IsActive" /> <asp:BoundField DataField="IsOnline" HeaderText="Is Online" SortExpression="IsOnline" ReadOnly="true" /> <asp:BoundField DataField="LastLoginDate" HeaderText="Last Login" SortExpression="LastLoginDate" ReadOnly="true" /> <asp:BoundField DataField="CreateDate" HeaderText="Member Since" SortExpression="CreateDate" ReadOnly="true" /> <asp:TemplateField HeaderText="Membership Ends" SortExpression="ExpiryDate"> <EditItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:TextBox> <cc1:CalendarExtender ID="TextBox1_CalendarExtender" runat="server" Enabled="True" TargetControlID="TextBox1"> </cc1:CalendarExtender> </EditItemTemplate> <InsertItemTemplate> <asp:TextBox ID="TextBox1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:TextBox> </InsertItemTemplate> <ItemTemplate> <asp:Label ID="Label1" runat="server" Text='<%# Bind("ExpiryDate") %>'></asp:Label> </ItemTemplate> </asp:TemplateField> </Fields> and here is the objectdatasource code: <asp:ObjectDataSource ID="odsUserDetails" runat="server" SelectMethod="GetAllUserDetailsByUserId" TypeName="QMS_BLL.Membership" UpdateMethod="UpdateUserForClient"> <UpdateParameters> <asp:Parameter Name="User_ID" Type="Int32" /> <asp:Parameter Name="firstName" Type="String" /> <asp:Parameter Name="lastName" Type="String" /> <asp:SessionParameter Name="updatedByUser" SessionField="userId" DefaultValue="1" /> <asp:Parameter Name="expiryDate" Type="DateTime" /> <asp:Parameter Name="Email" Type="String" /> <asp:Parameter Name="isActive" Type="String" /> </UpdateParameters> <SelectParameters> <asp:QueryStringParameter DefaultValue="1" Name="User_ID" QueryStringField="User_ID" Type="Int32" /> </SelectParameters> </asp:ObjectDataSource> Is the OnItemUpdating method still required when you have your custom BLL method called on insertevent? (which is being executed fine in my case but updating with the old values) or am I missing something else? Also I tried to provide an OnItemUpdating method and in there I tried to capture the contents of the textboxes (the new values). I got an exception: "Specified argument was out of the range of valid values. Parameter name: index" when I tried to do: TextBox txtFirstName = (TextBox)dvUsers.Rows[1].Cells[1].Controls[0]; Any help will be most appreciated.
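
    For what it's worth, inside ItemUpdating the edited values can usually be read from the event arguments rather than by indexing Rows/Cells/Controls, which is fragile and is likely what threw the out-of-range exception. A minimal sketch, assuming the field names match the markup above:

    ```csharp
    // Sketch: read the values typed into the edit row via e.NewValues
    // (DetailsViewUpdateEventArgs also exposes e.OldValues and e.Keys).
    protected void dvUsers_ItemUpdating(object sender, DetailsViewUpdateEventArgs e)
    {
        string firstName = e.NewValues["FirstName"] as string;
        string lastName  = e.NewValues["LastName"]  as string;
        string email     = e.NewValues["Email"]     as string;

        string oldEmail  = e.OldValues["Email"] as string;  // value the row was bound with
    }
    ```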

    Read the article

  • The HTML5 doctype is not triggering standards mode in IE8

    - by El Guapo
    I work for a company where all our sites currently use the XHTML 1.0 transitional doctype (yes, I know it is very old school). I want to change them all to use the HTML5 doctype seeing as it is backwards compatible. One of the reasons why I want to make the switch is because in IE8, if someone has the developer tools installed, then the old XHTML doctype switches the browser into compatibility mode and renders the page as IE7. From reading up on it I was led to believe that the HTML5 doctype will set any page to render in standards mode, but this is not happening: when I test it on our staging server it still flips into IE7 rendering mode. The weird thing is if I save the page with the HTML5 doctype locally and open it, it is rendering in IE8 standards mode. There must be something else causing it to drop into compatibility IE7 rendering. Any ideas what this could be? Below is the head of the test page I have been looking at: <!DOCTYPE html > <html xmlns="http://www.w3.org/1999/xhtml" xmlns:og="http://opengraphprotocol.org/schema/" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <title>Burton - Mens Clothing - Mens Fashion - Burton Menswear</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta name="description" content="Burton is one of the UK's leading men's clothing &amp; fashion retailers, with a range of men's clothing designed to make you look &amp; feel good. Find formal &amp; casual clothes &amp; accessories for men online at Burton menswear"/> <meta name="keywords" content="menswear, clothes for men, clothing for men, men clothes, men's fashion, men's wear, men's clothing online, men's clothes online, men's clothes shop, burton men's, burton menswear, burton uk, burton"/> <script type="text/javascript">document.getElementsByTagName('html')[0].className = 'js';</script> <link rel="stylesheet" type="text/css" href="http://eu.burton-menswear.com/wcsstore/ConsumerDirectStorefrontAssetStore/images/colors/color2/v3/css/screen.css" /> <link rel="stylesheet" type="text/css" href="http://eu.burton-menswear.com/wcsstore/ConsumerDirectStorefrontAssetStore/images/colors/color2/v3/css/print.css" media="print"/> <link rel="stylesheet" type="text/css" href="http://eu.burton-menswear.com/wcsstore/ConsumerDirectStorefrontAssetStore/images/colors/color2/v3/css/brand.css" /> <!--[if lt IE 8]> <link rel="stylesheet" href="http://eu.burton-menswear.com/wcsstore/ConsumerDirectStorefrontAssetStore/images/colors/color2/v3/css/ie.css" type="text/css" media="screen, projection"> <![endif]--> <meta http-equiv="content-language" content="en-gb" /> <link rel="shortcut icon" type="image/x-icon" href="http://eu.burton-menswear.com/favicon.ico" /> <link rel="search" type="application/opensearchdescription+xml" title="burton.co.uk Search" href="http://eu.burton-menswear.com/burton-search.xml"/> <!-- Start Summit Tag --> <script type="text/javascript"> var __stormJs = "t1.stormiq.com/dcv4/jslib/3286_D92B7532_4A18_46A8_864A_5FDF1DF25844.js"; </script> <script type="text/javascript" src="http://eu.burton-menswear.com/javascript/track.js"></script> <!-- End Summit Tag --> <!-- Start QuBit Tag --> <script src=//d3c3cq33003psk.cloudfront.net/opentag-31935-42109.js async defer></script> <!-- End QuBit Tag --> <link type="text/css" rel="stylesheet" href="http://reviews.br.wcstage.arcadiagroup.ltd.uk/bvstaging/static/6028-en_gb/bazaarvoice.css" ></link> </head>
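
    One thing that may be worth ruling out: IE8 can be pushed into compatibility view by things other than the doctype, such as an X-UA-Compatible response header or meta tag, the "Display intranet sites in Compatibility View" or group-policy settings, or the site appearing on Microsoft's compatibility view list. Any of these would explain why the same markup behaves differently on the staging server and locally. Purely as an illustration, the meta form that asks IE for its latest standards mode looks like this (it needs to appear early in the head, before scripts and stylesheets):

    ```html
    <!-- Ask IE to use its most recent standards mode regardless of compatibility view -->
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    ```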

    Read the article

  • GKTank example is not working.

    - by david
    Hello, I'm trying to get the GKTank example working with 2 iPhones. Both have bluetooth enabled. I start the app on both devices and tap the screen. The Peer Picker comes up and the devices find each other. If I select one device in the list it says "Waiting for {other iPhone}..." forever. On the {other iPhone} the waiting phone gets grayed out. If I select the device to connect to from both devices at the same time both go into waiting state forever... The debug log says this if I select the other iPhone on the debugged device: 2010-05-30 23:20:24.331 GKTank[2433:4e03] handleEvents started (2) 2010-05-30 23:20:25.269 GKTank[2433:4e03] ~ DNSServiceRegister callback: Ref=135f70, Flags=2, ErrorType=0 name=00oRWv-0A..David’s iPhone regtype=_gktank._udp. domain=local. 2010-05-30 23:20:25.375 GKTank[2433:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=2, IFIndex=8 (name=[en2]), ErrorType=0 name=00oRWv-0A..David’s iPhone regtype=_gktank._udp. domain=local. 2010-05-30 23:20:30.691 GKTank[2433:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 name=00K83eS0A..iPhone von Tamara regtype=_gktank._udp. domain=local. 2010-05-30 23:20:30.855 GKTank[2433:4e03] ~ DNSServiceQueryRecord callback: Ref=13a320, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 fullname=00k83es0a..iphone\032von\032tamara._gktank._udp.local. rrtype=16 rrclass=1 rdlen=18 ttl=7200 2010-05-30 23:20:30.872 GKTank[2433:4e03] ** peer 480260628: oldbusy=0, newbusy=0 2010-05-30 23:20:35.215 GKTank[2433:207] ** Stop resolving? potentially previous resolves 2010-05-30 23:20:35.226 GKTank[2433:207] **** BEGIN RESOLVE: 480260628 and it stays that way. On the second iPhone the device is listed as not available and grayed out. If I select each other at the same time it says this: 2010-05-30 23:24:31.416 GKTank[2442:4e03] handleEvents started (2) 2010-05-30 23:24:32.321 GKTank[2442:4e03] ~ DNSServiceRegister callback: Ref=135120, Flags=2, ErrorType=0 name=006JiAZ0A..David’s iPhone regtype=_gktank._udp. domain=local. 2010-05-30 23:24:32.419 GKTank[2442:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=2, IFIndex=8 (name=[en2]), ErrorType=0 name=006JiAZ0A..David’s iPhone regtype=_gktank._udp. domain=local. 2010-05-30 23:24:57.156 GKTank[2442:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 name=004_n6C0A..iPhone von Tamara regtype=_gktank._udp. domain=local. 2010-05-30 23:24:57.308 GKTank[2442:4e03] ~ DNSServiceQueryRecord callback: Ref=13a320, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 fullname=004_n6c0a..iphone\032von\032tamara._gktank._udp.local. rrtype=16 rrclass=1 rdlen=18 ttl=7200 2010-05-30 23:24:57.314 GKTank[2442:4e03] ** peer 203104196: oldbusy=0, newbusy=0 2010-05-30 23:25:02.383 GKTank[2442:207] ** Stop resolving? potentially previous resolves 2010-05-30 23:25:02.425 GKTank[2442:207] **** BEGIN RESOLVE: 203104196 2010-05-30 23:25:13.562 GKTank[2442:4e03] ~ DNSServiceQueryRecord callback: Ref=13a320, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 fullname=004_n6c0a..iphone\032von\032tamara._gktank._udp.local. rrtype=16 rrclass=1 rdlen=18 ttl=7200 2010-05-30 23:25:13.569 GKTank[2442:4e03] ** peer 203104196: oldbusy=0, newbusy=1 2010-05-30 23:25:33.660 GKTank[2442:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=0, IFIndex=-3 (name=[]), ErrorType=0 name=004_n6C0A..iPhone von Tamara regtype=_gktank._udp. domain=local. 2010-05-30 23:25:33.671 GKTank[2442:4e03] Peer [203104196] removed? (0). 
2010-05-30 23:25:33.683 GKTank[2442:4e03] GKPeer[139f10] 203104196 service count old=1 new=0 2010-05-30 23:25:37.786 GKTank[2442:4e03] ~ DNSServiceBrowse callback: Ref=134f30, Flags=2, IFIndex=-3 (name=[]), ErrorType=0 name=004_n6C0A..iPhone von Tamara regtype=_gktank._udp. domain=local. 2010-05-30 23:25:37.816 GKTank[2442:4e03] GKPeer[139f10] 203104196 service count old=0 new=1 ... and waits forever. Does anybody know whats wrong with this sample??

    Read the article

  • How to write specs with MSpec for code that changes Thread.CurrentPrincipal?

    - by Dan Jensen
    I've been converting some old specs to MSpec (were using NUnit/SpecUnit). The specs are for a view model, and the view model in question does some custom security checking. We have a helper method in our specs which will setup fake security credentials for the Thread.CurrentPrincipal. This worked fine in the old unit tests, but fails in MSpec. Specifically, I'm getting this exception: "System.Runtime.Serialization.SerializationException: Type is not resolved for member" It happens when part of the SUT tries to read the app config file. If I comment out the line which sets the CurrentPrincipal (or simply call it after the part that checks the config file), the error goes away, but the tests fail due to lack of credentials. Similarly, if I set the CurrentPrincipal to null, the error goes away, but again the tests fail because the credentials aren't set. I've googled this, and found some posts about making sure the custom principal is serializable when it crosses AppDomain boundaries (usually in reference to web apps). In our case, this is not a web app, and I'm not crossing any AppDomains. Our pincipal object is also serializable. I downloaded the source for MSpec, and found that the ConsoleRunner calls a class named AppDomainRunner. I haven't debugged into it, but it looks like it's running the specs in different app domains. So does anyone have any ideas on how I can overcome this? I really like MSpec, and would love to use it exclusively. But I need to be able to supply fake security credentials while running the tests. Thanks! Update: here's the spec class: [Subject(typeof(CountryPickerViewModel))] public class When_the_user_makes_a_selection : PickerViewModelSpecsBase { protected static CountryPickerViewModel picker; Establish context = () => { SetupFakeSecurityCredentials(); CreateFactoryStubs(); StubLookupServicer<ICountryLookupServicer>() .WithData(BuildActiveItems(new [] { "USA", "UK" })); picker = new CountryPickerViewModel(ViewFactory, ViewModelFactory, BusinessLogicFactory, CacheFactory); }; Because of = () => picker.SelectedItem = picker.Items[0]; Behaves_like<Picker_that_has_a_selected_item> a_picker_with_a_selection; } We have a number of these "picker" view models, all of which exhibit some common behavior. So I'm using the Behaviors feature of MSpec. This particular class is simulating the user selecting something from the (WPF) control which is bound to this VM. The SetupFakeSecurityCredentials() method is simply setting Thread.CurrentPrincipal to an instance of our custom principal, where the prinipal has been populated will full-access rights. Here's a fake CountryPickerViewModel which is enough to cause the error: public class CountryPickerViewModel { public CountryPickerViewModel(IViewFactory viewFactory, IViewModelFactory viewModelFactory, ICoreBusinessLogicFactory businessLogicFactory, ICacheFactory cacheFactory) { Items = new Collection<int>(); var validator = ValidationFactory.CreateValidator<object>(); } public int SelectedItem { get; set; } public Collection<int> Items { get; private set; } } It's the ValidationFactory call which blows up. ValidationFactory is an Enterprise Library object, which tries to access the config.

    Read the article

  • iPhone image is leaking, but where?

    - by Brodie4598
    the image that is being displayed in this code is leaking but I cant figure out how. What I have a tableview that displays images to be displayed. Each time a user selects an image, it should remove the old image, download a new one, then add it to the scroll view. But the old image is not being released and I cant figure out why... -(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { [imageView removeFromSuperview]; self.imageView = nil; NSUInteger row = [indexPath row]; NSString *tempC = [[NSString alloc]initWithFormat:@"http://www.website.com/%@_0001.jpg",[pdfNamesFinalArray objectAtIndex:row] ]; chartFileName = tempC; pdfName = [pdfNamesFinalArray objectAtIndex:row]; [tableView deselectRowAtIndexPath:indexPath animated:YES]; NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *docsPath = [paths objectAtIndex:0]; NSString *tempString = [[[NSString alloc]initWithFormat:@"%@/%@.jpg",docsPath,pdfName]autorelease]; NSData *data = [NSData dataWithContentsOfFile:tempString]; if (data != NULL){ self.imageView = nil; [imageView removeFromSuperview]; self.imageView = nil; UIImageView *tempImage = [[[UIImageView alloc]initWithImage:[UIImage imageWithData:data]]autorelease]; self.imageView = tempImage; [data release]; scrollView.contentSize = CGSizeMake(imageView.frame.size.width , imageView.frame.size.height); scrollView.maximumZoomScale = 1; scrollView.minimumZoomScale = .6; scrollView.clipsToBounds = YES; scrollView.delegate = self; [scrollView addSubview:imageView]; scrollView.zoomScale = .37; } else { [data release]; self.imageView = nil; [imageView removeFromSuperview]; self.imageView = nil; activityIndicator.hidden = NO; getChartsButton.enabled = NO; chartListButton.enabled = NO; saveChartButton.enabled = NO; [NSThread detachNewThreadSelector:@selector(downloadImages) toTarget:self withObject:nil]; } chartPanel.hidden = YES; } -(void) downloadImages { NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init]; self.imageView = nil; [imageView removeFromSuperview]; NSURL *url = [[[NSURL alloc]initWithString:chartFileName]autorelease]; NSData *data = [NSData dataWithContentsOfURL:url]; UIImageView *tempImage = [[[UIImageView alloc]initWithImage:[UIImage imageWithData:data]]autorelease]; self.imageView = tempImage; tempImage = nil; scrollView.contentSize = CGSizeMake(imageView.frame.size.width , imageView.frame.size.height); scrollView.maximumZoomScale = 1; scrollView.minimumZoomScale = .37; scrollView.clipsToBounds = YES; scrollView.delegate = self; [scrollView addSubview:imageView]; scrollView.zoomScale = .6; activityIndicator.hidden = YES; getChartsButton.enabled = YES; chartListButton.enabled = YES; saveChartButton.enabled = YES; [pool drain]; [pool release]; }

    Read the article

  • jQuery line 67 saying "TypeError: 'undefined' is not a function."

    - by dfdf
    var dbShell; function doLog(s){ /* setTimeout(function(){ console.log(s); }, 3000); */ } function dbErrorHandler(err){ alert("DB Error: "+err.message + "\nCode="+err.code); } function phoneReady(){ doLog("phoneReady"); //First, open our db dbShell = window.openDatabase("SimpleNotes", 2, "SimpleNotes", 1000000); doLog("db was opened"); //run transaction to create initial tables dbShell.transaction(setupTable,dbErrorHandler,getEntries); doLog("ran setup"); } //I just create our initial table - all one of em function setupTable(tx){ doLog("before execute sql..."); tx.executeSql("CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY,title,body,updated)"); doLog("after execute sql..."); } //I handle getting entries from the db function getEntries() { //doLog("get entries"); dbShell.transaction(function(tx) { tx.executeSql("select id, title, body, updated from notes order by updated desc",[],renderEntries,dbErrorHandler); }, dbErrorHandler); } function renderEntries(tx,results){ doLog("render entries"); if (results.rows.length == 0) { $("#mainContent").html("<p>You currently do not have any notes.</p>"); } else { var s = ""; for(var i=0; i<results.rows.length; i++) { s += "<li><a href='edit.html?id="+results.rows.item(i).id + "'>" + results.rows.item(i).title + "</a></li>"; } $("#noteTitleList").html(s); $("#noteTitleList").listview("refresh"); } } function saveNote(note, cb) { //Sometimes you may want to jot down something quickly.... if(note.title == "") note.title = "[No Title]"; dbShell.transaction(function(tx) { if(note.id == "") tx.executeSql("insert into notes(title,body,updated) values(?,?,?)",[note.title,note.body, new Date()]); else tx.executeSql("update notes set title=?, body=?, updated=? where id=?",[note.title,note.body, new Date(), note.id]); }, dbErrorHandler,cb); } function init(){ document.addEventListener("deviceready", phoneReady, false); //handle form submission of a new/old note $("#editNoteForm").live("submit",function(e) { var data = {title:$("#noteTitle").val(), body:$("#noteBody").val(), id:$("#noteId").val() }; saveNote(data,function() { $.mobile.changePage("index.html",{reverse:true}); }); e.preventDefault(); }); //will run after initial show - handles regetting the list $("#homePage").live("pageshow", function() { getEntries(); }); //edit page logic needs to know to get old record (possible) $("#editPage").live("pageshow", function() { var loc = $(this).data("url"); if(loc.indexOf("?") >= 0) { var qs = loc.substr(loc.indexOf("?")+1,loc.length); var noteId = qs.split("=")[1]; //load the values $("#editFormSubmitButton").attr("disabled","disabled"); dbShell.transaction( function(tx) { tx.executeSql("select id,title,body from notes where id=?",[noteId],function(tx,results) { $("#noteId").val(results.rows.item(0).id); $("#noteTitle").val(results.rows.item(0).title); $("#noteBody").val(results.rows.item(0).body); $("#editFormSubmitButton").removeAttr("disabled"); }); }, dbErrorHandler); } else { $("#editFormSubmitButton").removeAttr("disabled"); } }); } Dats my code, awfully long, huh? Well anyways I got most of it from here, however I get an error on line 67 saying "TypeError: 'undefined' is not a function.". I'm using Steroids (phonegap-like) and testing dis on an iPhone simulator. I'm sure it uses some cordova for the database work. Thank you for your help :-)

    Read the article

  • Lock free multiple readers single writer

    - by dummzeuch
    I have got an in memory data structure that is read by multiple threads and written by only one thread. Currently I am using a critical section to make this access threadsafe. Unfortunately this has the effect of blocking readers even though only another reader is accessing it. There are two options to remedy this: use TMultiReadExclusiveWriteSynchronizer do away with any blocking by using a lock free approach For 2. I have got the following so far (any code that doesn't matter has been left out): type TDataManager = class private FAccessCount: integer; FData: TDataClass; public procedure Read(out _Some: integer; out _Data: double); procedure Write(_Some: integer; _Data: double); end; procedure TDataManager.Read(out _Some: integer; out _Data: double); var Data: TDAtaClass; begin InterlockedIncrement(FAccessCount); try // make sure we get both values from the same TDataClass instance Data := FData; // read the actual data _Some := Data.Some; _Data := Data.Data; finally InterlockedDecrement(FAccessCount); end; end; procedure TDataManager.Write(_Some: integer; _Data: double); var NewData: TDataClass; OldData: TDataClass; ReaderCount: integer; begin NewData := TDataClass.Create(_Some, _Data); InterlockedIncrement(FAccessCount); OldData := TDataClass(InterlockedExchange(integer(FData), integer(NewData)); // now FData points to the new instance but there might still be // readers that got the old one before we exchanged it. ReaderCount := InterlockedDecrement(FAccessCount); if ReaderCount = 0 then // no active readers, so we can safely free the old instance FreeAndNil(OldData) else begin /// here is the problem end; end; Unfortunately there is the small problem of getting rid of the OldData instance after it has been replaced. If no other thread is currently within the Read method (ReaderCount=0), it can safely be disposed and that's it. But what can I do if that's not the case? I could just store it until the next call and dispose it there, but Windows scheduling could in theory let a reader thread sleep while it is within the Read method and still has got a reference to OldData. If you see any other problem with the above code, please tell me about it. This is to be run on computers with multiple cores and the above methods are to be called very frequently. In case this matters: I am using Delphi 2007 with the builtin memory manager. I am aware that the memory manager probably enforces some lock anyway when creating a new class but I want to ignore that for the moment. Edit: It may not have been clear from the above: For the full lifetime of the TDataManager object there is only one thread that writes to the data, not several that might compete for write access. So this is a special case of MREW.
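
    For comparison with the lock-free attempt, option 1 from the list above (TMultiReadExclusiveWriteSynchronizer) sidesteps the reclamation problem entirely, at the cost of the writer briefly blocking readers. A rough sketch only, assuming a private field FLock: TMultiReadExclusiveWriteSynchronizer replaces the FAccessCount counter (construction and destruction of the synchronizer omitted):

    ```delphi
    // Sketch of option 1: guard FData with a TMultiReadExclusiveWriteSynchronizer
    // (declared in SysUtils). BeginWrite waits until all readers have left, so the
    // old instance can be freed safely right after the swap.
    procedure TDataManager.Read(out _Some: integer; out _Data: double);
    begin
      FLock.BeginRead;              // many readers may hold the read lock at once
      try
        _Some := FData.Some;
        _Data := FData.Data;
      finally
        FLock.EndRead;
      end;
    end;

    procedure TDataManager.Write(_Some: integer; _Data: double);
    var
      NewData, OldData: TDataClass;
    begin
      NewData := TDataClass.Create(_Some, _Data);
      FLock.BeginWrite;             // exclusive: blocks until readers are done
      try
        OldData := FData;
        FData := NewData;
      finally
        FLock.EndWrite;
      end;
      OldData.Free;                 // no reader can still reference the old instance
    end;
    ```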

    Read the article

  • SQL performance on new server

    - by Rapunzo
    My database is running on a pc (AMD Phenom x6, intel ssd disk, 8GB DDR3 RAM and windows 7 OS + sql server 2008 R2 sp3 ) and it started working hard, timeout problems and up to 30 seconds long queries after 200 mb of database And I also have an old server pc (IBM x-series 266: 72*3 15k rpm scsi discs with raid5, 4 gb ram and windows server 2003 + sql server 2008 R2 sp3 ) and same query start to give results in 100 seconds.. I tried query analyser tool for tuning my indexed. but not so much improvements. its a big dissapointment for me. because I thought even its an old server pc it should be more powerfull with 15k rpm discs with raid5. what should I do. do I need $10.000 new server to get a good performance for my sql server? cant I use that IBM server? Extra information: there is 50 sql users and its an ERP program. There is my query ALTER FUNCTION [dbo].[fnDispoTerbiye] ( ) RETURNS TABLE AS RETURN ( SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre, MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi, DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari, M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo) DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn, MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi, DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId, (SELECT CASE WHEN DT.dovizId = 2 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 3 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 1 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2)) END AS Expr1) AS ToplamTLfiyat, DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName, DT.kayitTarihi, O.orguAd, 'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')' + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama, DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati FROM tblDoviz AS DV WITH(NOLOCK) INNER JOIN tblDispoTerbiye AS DT WITH(NOLOCK) INNER JOIN tblTanimlar AS B WITH(NOLOCK) ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2 ON DV.id = DT.dovizId RIGHT OUTER JOIN tblMusteri AS M1 WITH(NOLOCK) INNER JOIN tblSiparisDetay AS SD WITH(NOLOCK) INNER JOIN tblDispo AS MD WITH(NOLOCK) ON SD.siparisDetayId = MD.siparisDetayId INNER JOIN tblTipTur AS TT WITH(NOLOCK) ON SD.tipTurId = TT.tipTurId INNER JOIN tblSiparis AS SP WITH(NOLOCK) ON SD.siparisId = SP.siparisId ON M1.musteriNo = SP.musteriNo INNER JOIN tblTip AS TP WITH(NOLOCK) ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant INNER JOIN tblOrgu AS O WITH(NOLOCK) ON TP.orguId = O.orguId INNER JOIN tblMusteri AS M WITH(NOLOCK) INNER JOIN tblSevkiyat AS SV WITH(NOLOCK) ON M.musteriNo = SV.musteriNo INNER JOIN tblSevkDetay AS SVD WITH(NOLOCK) ON SV.sevkNo = SVD.sevkNo ON MD.mamulDispoHamSevkno = SV.sevkNo LEFT OUTER JOIN tblTop AS T WITH(NOLOCK) INNER JOIN tblDispo AS HD WITH(NOLOCK) ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId ON SVD.dispoTuruId = 
T.dispoTuruId AND SVD.dispoNo = T.dispoNo AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH(NOLOCK) ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo --WHERE DT.gelisTarihi IS NULL -- OR DT.gelisTarihi > GETDATE()-30 GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj, DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire, DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd, M.musteriAdi, M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu, SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu, SV.sevkNo, SV.sevkTarihi, DT.prosesOk,SP.siparisId )
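
    Before spending money on hardware, it may be worth confirming what the plan is actually doing on both machines; with joins over that many tables, a missing index can easily dwarf any disk or RAM difference. A sketch of how the I/O cost and SQL Server's own missing-index hints can be inspected for the function above (treat the DMV output as hints, not prescriptions):

    ```sql
    -- Measure logical reads and elapsed time for one call of the function above
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT * FROM dbo.fnDispoTerbiye();

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

    -- Missing-index DMVs: columns the optimizer would have liked to seek on
    SELECT TOP (20)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
         ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;
    ```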

    Read the article

  • using getScript to import plugin on page using multiple versions of jQuery

    - by mikez302
    I am developing an app on a page that uses jQuery 1.2.6, but I would like to use jQuery 1.4.2 for my app. I really don't like to use multiple versions of jQuery like this but the copy on the page (1.2.6) is something I have no control over. I decided to isolate my code like this: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html><head> <script type="text/javascript" src="jquery-1.2.6.min.js> <script type="text/javascript" src="pageStuff.js"> </head> <body> Welcome to our page. <div id="app"> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js"></script> <script type="text/javascript" src="myStuff.js"> </div> </body></html> The file myStuff.js has my own code that is supposed to use jQuery 1.4.2, and it looks like this: (function($) { //wrap everything in function to add ability to use $ var with noConflict var jQuery = $; //my code })(jQuery.noConflict(true)); This is an extremely simplified version, but I hope you get the idea of what I did. For a while, everything worked fine. However, I decided to want to use a jQuery plugin in a separate file. I tested it and it acted funny. After some experimentation, I found out that the plugin was using the old version of jQuery, when I wanted it to use the new version. Does anyone know how to import and run a js file from the context within the function wrapping the code in myStuff.js? In case this matters to anyone, here is how I know the plugin is using the old version, and what I did to try to solve the problem: I made a file called test.js, consisting of this line: alert($.fn.jquery); I tried referencing the file in a script tag the way external Javascript is usually included, below myStuff.js, and it came up as 1.2.6, like I expected. I then got rid of that script tag and put this line in myStuff.js: $.getScript("test.js"); and it still came back as 1.2.6. That wasn't a big surprise -- according to jQuery's documentation, scripts included that way are executed in the global context. I then tried doing this instead: var testFn = $.proxy($.getScript, this); testFn("test.js"); and it still came back as 1.2.6. After some tinkering, I found out that the "this" keyword referred to the window, which I assume means the global context. I am looking for something to put in place of "this" to refer to the context of the enclosing function, or some other way to make the code in the file run from the enclosing function. I noticed that if I copy and paste the code, it works fine, but it is a big plugin that is used in many places, and I would prefer not to clutter up my file with their code. I am out of ideas. Does anyone else know how to do this?

    Read the article

  • Generating cache file for Twitter rss feed

    - by Kerri
    I'm working on a site with a simple php-generated twitter box with user timeline tweets pulled from the user_timeline rss feed, and cached to a local file to cut down on loads, and as backup for when twitter goes down. I based the caching on this: http://snipplr.com/view/8156/twitter-cache/. It all seemed to be working well yesterday, but today I discovered the cache file was blank. Deleting it then loading again generated a fresh file. The code I'm using is below. I've edited it to try to get it to work with what I was already using to display the feed and probably messed something crucial up. The changes I made are the following (and I strongly believe that any of these could be the cause): - Revised the time difference code (the linked example seemed to use a custom function that wasn't included in the code) Removed the "serialize" function from the "fwrites". This is purely because I couldn't figure out how to unserialize once I loaded it in the display code. I truthfully don't understand the role that serialize plays or how it works, so I'm sure I should have kept it in. If that's the case I just need to understand where/how to deserialize in the second part of the code so that it can be parsed. Removed the $rss variable in favor of just loading up the cache file in my original tweet display code. So, here are the relevant parts of the code I used: <?php $feedURL = "http://twitter.com/statuses/user_timeline/#######.rss"; // START CACHING $cache_file = dirname(__FILE__).'/cache/twitter_cache.rss'; // Start with the cache if(file_exists($cache_file)){ $mtime = (strtotime("now") - filemtime($cache_file)); if($mtime > 600) { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } echo "<!-- twitter cache generated ".date('Y-m-d h:i:s', filemtime($cache_file))." -->"; } else { $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/#######.rss'); $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } //END CACHING //START DISPLAY $doc = new DOMDocument(); $doc->load($cache_file); $arrFeeds = array(); foreach ($doc->getElementsByTagName('item') as $node) { $itemRSS = array ( 'title' => $node->getElementsByTagName('title')->item(0)->nodeValue, 'date' => $node->getElementsByTagName('pubDate')->item(0)->nodeValue ); array_push($arrFeeds, $itemRSS); } // the rest of the formatting and display code.... } ?> ETA 6/17 Nobody can help…? I'm thinking it has something to do with writing a blank cache file over a good one when twitter is down, because otherwise I imagine that this should be happening every ten minutes when the cache file is overwritten again, but it doesn't happen that frequently. I made the following change to the part where it checks how old the file is to overwrite it: $cache_rss = file_get_contents('http://twitter.com/statuses/user_timeline/75168146.rss'); if($mtime > 600 && $cache_rss != ''){ $cache_static = fopen($cache_file, 'wb'); fwrite($cache_static, $cache_rss); fclose($cache_static); } …so now, it will only write the file if it's over ten minutes old and there's actual content retrieved from the rss page. Do you think this will work?
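
    The guard described at the end can be tightened slightly, because file_get_contents() returns false (not just an empty string) when the fetch fails; checking for both keeps a Twitter outage from ever truncating the cache. A sketch of the refresh branch with that check, using the variables already defined above ($fresh is just a stand-in name for the newly fetched feed):

    ```php
    <?php
    // Refresh the cache only when the remote fetch actually returned content;
    // on failure (false) or an empty body, keep serving the last good copy.
    if ($mtime > 600) {
        $fresh = @file_get_contents($feedURL);
        if ($fresh !== false && trim($fresh) !== '') {
            $cache_static = fopen($cache_file, 'wb');
            fwrite($cache_static, $fresh);
            fclose($cache_static);
        }
    }
    ?>
    ```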

    Read the article

  • How to map code points to unicode characters depending on the font used?

    - by Alex Schröder
    The client prints labels and has been using a set of symbolic (?) fonts to do this. The application uses a single byte database (Oracle with Latin-1). The old application I am replacing was not Unicode aware. It somehow did OK. The replacement application I am writing is supposed to handle the old data. The symbols picked from the charmap application often map to particular Unicode characters, but sometimes they don't. What looks like the Moon using the LAB3 font, for example, is in fact U+2014 (EM DASH). When users paste this character into a Swing text field, the character has the code point 8212. It was "moved" into the Private Use Area (by Windows? Java?). When saving this character to the database, Oracle decides that it cannot be safely encoded and replaces it with the dreaded ¿. Thus, I started shifting the characters by 8000: -= 8000 when saving, += 8000 when displaying the field. Unfortunately I discovered that other characters were not shifted by the same amount. In one particular font, for example, ž has the code point 382, so I shifted it by +/-256 to "fix" it. By now I'm dreading the discovery of more strange offsets and I wonder: Can I get at this mapping using Java? Perhaps the TTF font has a list of the 255 glyphs it encodes and what Unicode characters those correspond to and I can do it "right"? Right now I'm using the following kludge: static String fromDatabase(String str, String fontFamily) { if (str != null && fontFamily != null) { Font font = new Font(fontFamily, Font.PLAIN, 1); boolean changed = false; char[] chars = str.toCharArray(); for (int i = 0; i < chars.length; i++) { if (font.canDisplay(chars[i] + 0xF000)) { // WE8MSWIN1252 + WinXP chars[i] += 0xF000; changed = true; } else if (chars[i] >= 128 && font.canDisplay(chars[i] + 8000)) { // WE8ISO8859P1 + WinXP chars[i] += 8000; changed = true; } else if (font.canDisplay(chars[i] + 256)) { // ž in LAB1 Eastern = 382 chars[i] += 256; changed = true; } } if (changed) str = new String(chars); } return str; } static String toDatabase(String str, String fontFamily) { if (str != null && fontFamily != null) { boolean changed = false; char[] chars = str.toCharArray(); for (int i = 0; i < chars.length; i++) { int chr = chars[i]; if (chars[i] > 0xF000) { // WE8MSWIN1252 + WinXP chars[i] -= 0xF000; changed = true; } else if (chars[i] > 8000) { // WE8ISO8859P1 + WinXP chars[i] = (char) (chars[i] - 8000); changed = true; } else if (chars[i] > 256) { // ž in LAB1 Eastern = 382 chars[i] = (char) (chars[i] - 256); changed = true; } } if (changed) return new String(chars); } return str; }
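
    On the question of reading the mapping from the font itself: standard java.awt can at least load the .ttf file directly and report which code points it claims to cover, which is enough to build a per-font translation table by inspection; reading the actual cmap (glyph-to-code-point) table would need a font parsing library. A small sketch, where the file name is only a placeholder:

    ```java
    import java.awt.Font;
    import java.io.File;

    // Load a TTF file directly and list the code points it reports it can display.
    // This probes coverage via Font.canDisplay(); it does not expose the font's
    // internal cmap table.
    public class FontCoverage {
        public static void main(String[] args) throws Exception {
            Font font = Font.createFont(Font.TRUETYPE_FONT, new File("LAB3.ttf"))
                            .deriveFont(Font.PLAIN, 12f);
            for (char c = 0x20; c < 0xFFFF; c++) {
                if (font.canDisplay(c)) {
                    System.out.printf("U+%04X%n", (int) c);
                }
            }
        }
    }
    ```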

    Read the article

  • MySQL query works in PHPMyAdmin but not PHP

    - by Su4p
    I do not understand what's happening. I have a query in PHP who crashes -with a strange error-. When I copy/paste the exact same request in PHPMyAdmin it works as expected. What am I doing wrong here ? SELECT oms_patient.id, oms_patient.date, oms_patient.date_modif, date_modif, AES_DECRYPT(nom,"xxxxx") AS "Nom", AES_DECRYPT(prenom,"xxxxx") AS "Prénom usuel", DATE_FORMAT(ddn, "%d/%m/%Y") AS "Date de naissance", villeNaissance AS "Lieu de naissance (ville)", CONCAT(oms_departement.libelle,"(",id_departement,")") AS "Lieu de vie", CONCAT(oms_pays.libelle,"(",id_pays,")") AS "Pays", CONCAT(patientsexe.libelle,"(",id_sexe,")") AS "Sexe", CONCAT(patientprofession.libelle,"(",id_profession,")") AS "Profession", IF(asthme>0,"Oui","Non") AS "Asthme", IF(rhinite>0,"Oui","Non") AS "Rhinite", IF(bcpo>0,"Oui","Non") AS "BPCO", IF(insuffisanceResp>0,"Oui","Non") AS "Insuffisance respiratoire chronique", IF(chirurgieOrl>0,"Oui","Non") AS "Chirurgie ORL du ronflement", IF(autreChirurgie>0,"Oui","Non") AS "Autre chirurgie ORL", IF(allergies>0,"Oui","Non") AS "Allergies", IF(OLD>0,"Oui","Non") AS "OLD", IF(hypertensionArterielle>0,"Oui","Non") AS "Hypertension artérielle", IF(infarctusMyocarde>0,"Oui","Non") AS "Infarctus du myocarde", IF(insuffisanceCoronaire>0,"Oui","Non") AS "Insuffisance coronaire", IF(troubleRythme>0,"Oui","Non") AS "Trouble du rythme", IF(accidentVasculaireCerebral>0,"Oui","Non") AS "Accident vasculaire cérébral", IF(insuffisanceCardiaque>0,"Oui","Non") AS "Insuffisance cardiaque", IF(arteriopathie>0,"Oui","Non") AS "Artériopathie", IF(tabagismeActuel>0,"Oui","Non") AS "Tabagisme actuel", CONCAT(nbPaquetsActuel," ","PA") AS "", IF(tabagismeAncien>0,"Oui","Non") AS "Tabagisme ancien", CONCAT(nbPaquetsAncien," ","PA") AS "", IF(alcool>0,"Oui","Non") AS "Alcool (conso régulière)", IF(refluxGastro>0,"Oui","Non") AS "Reflux gastro-oesophagien", IF(glaucome>0,"Oui","Non") AS "Glaucome", IF(diabete>0,"Oui","Non") AS "Diabète", CONCAT(patienttypeDiabete.libelle,"(",id_typeDiabete,")") AS "", IF(hypercholesterolemie>0,"Oui","Non") AS "Hypercholestérolémie", IF(hypertriglyceridemie>0,"Oui","Non") AS "Hypertriglycéridémie", IF(dysthyroidie>0,"Oui","Non") AS "Dysthyroïdie", IF(depression>0,"Oui","Non") AS "Dépression", IF(sedentarite>0,"Oui","Non") AS "Sédentarité", IF(syndromeDApneesSommeil>0,"Oui","Non") AS "SAS", IF(obesite>0,"Oui","Non") AS "Obésité", IF(dysmorphieFaciale>0,"Oui","Non") AS "Dysmorphie faciale", TextObservations AS "", id_user FROM oms_patient LEFT JOIN oms_departement ON oms_departement.id = id_departement LEFT JOIN oms_pays ON oms_pays.id = id_pays LEFT JOIN patientsexe ON patientsexe.id = id_sexe LEFT JOIN patientprofession ON patientprofession.id = id_profession LEFT JOIN patienttypeDiabete ON patienttypeDiabete.id = id_typeDiabete WHERE oms_patient.id=1 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'small"(conso régulière)", IF(refluxGastro0,"Oui","Non") as "Reflux ga' at line 1 "near 'small" <-- where is small o_O The PHP code isn't really relevant cause you won't see a lot. $db = mysql_connect(); mysql_select_db();//TODO SWITCH TO PDO mysql_query("SET NAMES UTF8"); $fields = $form->getFields($form); $settingsForm = $form->getSettings(); $sql = 'SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,'; foreach ($fields as $field) { if (!$field->isMultiSelect()) { $field->select_full(&$sql, 'oms_patient', null); } } if (isset($settingsForm['linkTo'])) { $idLinkTo = 'id_' . 
str_replace('oms_', '', $settingsForm['linkTo']); $sql .= $idLinkTo; } $sql.=' FROM oms_patient'; foreach ($fields as $field) { if (!$field->isMultiSelect() && $field->getTable('oms_patient')) { $sql .=' LEFT JOIN ' . $field->getTable('oms_patient') . ' ON ' . $field->getTable('oms_patient') . '.id = '.$field->getFieldName().' '; } } $sql.=' where oms_patient.id=' . $this->m_settings['e']; $result = mysql_query($sql) or die('Erreur SQL !<br>' . $sql . '<br>' . mysql_error()); $data = mysql_fetch_assoc($result); var_dump of $sql string(2663) "SELECT oms_patient.id,oms_patient.date,oms_patient.date_modif,date_modif,AES_DECRYPT(nom,"xxxxx") as "Nom",AES_DECRYPT("prenom","xxxxx") as "Prénom usuel",DATE_FORMAT(ddn, "%d/%m/%Y") as "Date de naissance",villeNaissance as "Lieu de naissance (ville)",CONCAT(oms_departement.libelle,"(",id_departement,")") as "Lieu de vie",CONCAT(oms_pays.libelle,"(",id_pays,")") as "Pays",CONCAT(patientsexe.libelle,"(",id_sexe,")") as "Sexe",CONCAT(patientprofession.libelle,"(",id_profession,")") as "Profession", IF"... can't go further to see what is in the output after the "..." <-- if you have an idea

    Read the article

  • Muti-Schema Privileges for a Table Trigger in an Oracle Database

    - by sisslack
    I'm trying to write a table trigger which queries another table that is outside the schema where the trigger will reside. Is this possible? It seems like I have no problem querying tables in my schema but I get: Error: ORA-00942: table or view does not exist when trying trying to query tables outside my schema. EDIT My apologies for not providing as much information as possible the first time around. I was under the impression this question was more simple. I'm trying create a trigger on a table that changes some fields on a newly inserted row based on the existence of some data that may or may not be in a table that is in another schema. The user account that I'm using to create the trigger does have the permissions to run the queries independently. In fact, I've had my trigger print the query I'm trying to run and was able to run it on it's own successfully. I should also note that I'm building the query dynamically by using the EXECUTE IMMEDIATE statement. Here's an example: CREATE OR REPLACE TRIGGER MAIN_SCHEMA.EVENTS BEFORE INSERT ON MAIN_SCHEMA.EVENTS REFERENCING OLD AS OLD NEW AS NEW FOR EACH ROW DECLARE rtn_count NUMBER := 0; table_name VARCHAR2(17) := :NEW.SOME_FIELD; key_field VARCHAR2(20) := :NEW.ANOTHER_FIELD; BEGIN CASE WHEN (key_field = 'condition_a') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_A.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; WHEN (key_field = 'condition_b') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_B.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; WHEN (key_field = 'condition_c') THEN EXECUTE IMMEDIATE 'select count(*) from OTHER_SCHEMA_C.'||table_name||' where KEY_FIELD='''||key_field||'''' INTO rtn_count; END CASE; IF (rtn_count > 0) THEN -- change some fields that are to be inserted END IF; END; The trigger seams to fail on the EXECUTE IMMEDIATE with the previously mentioned error. EDIT I have done some more research and I can offer more clarification. The user account I'm using to create this trigger is not MAIN_SCHEMA or any one of the OTHER_SCHEMA_Xs. The account I'm using (ME) is given privileges to the involved tables via the schema users themselves. For example (USER_TAB_PRIVS): GRANTOR GRANTEE TABLE_SCHEMA TABLE_NAME PRIVILEGE GRANTABLE HIERARCHY MAIN_SCHEMA ME MAIN_SCHEMA EVENTS DELETE NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS INSERT NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS SELECT NO NO MAIN_SCHEMA ME MAIN_SCHEMA EVENTS UPDATE NO NO OTHER_SCHEMA_X ME OTHER_SCHEMA_X TARGET_TBL SELECT NO NO And I have the following system privileges (USER_SYS_PRIVS): USERNAME PRIVILEGE ADMIN_OPTION ME ALTER ANY TRIGGER NO ME CREATE ANY TRIGGER NO ME UNLIMITED TABLESPACE NO And this is what I found in the Oracle documentation: To create a trigger in another user's schema, or to reference a table in another schema from a trigger in your schema, you must have the CREATE ANY TRIGGER system privilege. With this privilege, the trigger can be created in any schema and can be associated with any user's table. In addition, the user creating the trigger must also have EXECUTE privilege on the referenced procedures, functions, or packages. Here: Oracle Doc So it looks to me like this should work, but I'm not sure about the "EXECUTE privilege" it's referring to in the doc.

    Read the article

  • Problem using delete[] (Heap corruption) when implementing operator+= (C++)

    - by Darel
    I've been trying to figure this out for hours now, and I'm at my wit's end. I would surely appreciate it if someone could tell me when I'm doing wrong. I have written a simple class to emulate basic functionality of strings. The class's members include a character pointer data (which points to a dynamically created char array) and an integer strSize (which holds the length of the string, sans terminator.) Since I'm using new and delete, I've implemented the copy constructor and destructor. My problem occurs when I try to implement the operator+=. The LHS object builds the new string correctly - I can even print it using cout - but the problem comes when I try to deallocate the data pointer in the destructor: I get a "Heap Corruption Detected after normal block" at the memory address pointed to by the data array the destructor is trying to deallocate. Here's my complete class and test program: #include <iostream> using namespace std; // Class to emulate string class Str { public: // Default constructor Str(): data(0), strSize(0) { } // Constructor from string literal Str(const char* cp) { data = new char[strlen(cp) + 1]; char *p = data; const char* q = cp; while (*q) *p++ = *q++; *p = '\0'; strSize = strlen(cp); } Str& operator+=(const Str& rhs) { // create new dynamic memory to hold concatenated string char* str = new char[strSize + rhs.strSize + 1]; char* p = str; // new data char* i = data; // old data const char* q = rhs.data; // data to append // append old string to new string in new dynamic memory while (*p++ = *i++) ; p--; while (*p++ = *q++) ; *p = '\0'; // assign new values to data and strSize delete[] data; data = str; strSize += rhs.strSize; return *this; } // Copy constructor Str(const Str& s) { data = new char[s.strSize + 1]; char *p = data; char *q = s.data; while (*q) *p++ = *q++; *p = '\0'; strSize = s.strSize; } // destructor ~Str() { delete[] data; } const char& operator[](int i) const { return data[i]; } int size() const { return strSize; } private: char *data; int strSize; }; ostream& operator<<(ostream& os, const Str& s) { for (int i = 0; i != s.size(); ++i) os << s[i]; return os; } // Test constructor, copy constructor, and += operator int main() { Str s = "hello"; // destructor for s works ok Str x = s; // destructor for x works ok s += "world!"; // destructor for s gives error cout << s << endl; cout << x << endl; return 0; }
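
    One detail worth tracing by hand is where p points after the two copy loops: the second while loop already copies rhs's terminating '\0' and leaves p one past it, so the extra *p = '\0' afterwards appears to write one byte beyond the new[strSize + rhs.strSize + 1] allocation, exactly the kind of off-by-one a heap checker reports on delete[]. A sketch of the same member with the bookkeeping made explicit (written as an out-of-class definition of the operator from the class above):

    ```cpp
    // Append without the trailing out-of-bounds write: both loops stop before the
    // terminator, and a single '\0' is written inside the allocated block.
    Str& Str::operator+=(const Str& rhs) {
        char* str = new char[strSize + rhs.strSize + 1];
        char* p = str;

        for (const char* i = data; i && *i; ++i)      // copy the old contents
            *p++ = *i;
        for (const char* q = rhs.data; q && *q; ++q)  // append the right-hand side
            *p++ = *q;
        *p = '\0';                                    // index strSize + rhs.strSize

        delete[] data;
        data = str;
        strSize += rhs.strSize;
        return *this;
    }
    ```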

    Read the article

  • Opinions on Copy Protection / Software Licensing via phoning home?

    - by Jakobud
    I'm developing some software that I'm going to eventually sell. I've been thinking about different copy protection mechanisms, both custom and 3rd party. I know that no copy protection is 100% foolproof, but I need to at least try. So I'm looking for some opinions on the approach I'm thinking about: One method I'm thinking about is just having my software connect to a remote server when it starts up, in order to verify the license based off the MAC address of the ethernet port. I'm not sure if the server would be running a MySQL database that retrieves the license information, or what... Is there a more simple way? Maybe some type of encrypted file that is read? I would make the software still work if it can't connect to the server. I don't want to lock someone out just because they don't have internet access at that moment in time. In case you are wondering, the software I'm developing is extremely internet/network dependent. So it's actually quite unlikely that the user wouldn't have internet access when using it. Actually, it's pretty useless without internet/network access. Anyone know what I would do about computers that have multiple MAC addresses? A lot of motherboards these days have 2 ethernet ports. And most laptops have 1 ethernet, 1 wifi and Bluetooth MAC addresses. I suppose I could just pick a MAC port and run with it. Not sure if it really matters. A smart and tricky user could determine the server that the software is connecting to and perhaps add it to their host file so that it always tries to connect to localhost. How likely do you think this is? And do you think it's possible for the software to check if this is being done? I guess parsing of the host file could always work. Look for your server address in there and see if it's connecting to localhost or something. I've considered dongles, but I'm trying to avoid them just because I know they are a pain to work with. Keeping them updated and possibly requiring the customer to run their own license server is a bit too much for me. I've experienced that and it's a bit of a pain that I wouldn't want to put my customers through. Also I'm trying to avoid that extra overhead cost of using 3rd party dongles. Also, I'm leaning toward connecting to a remote server to verify authentication as opposed to just sending the user some sort of license file because what happens when the user buys a new computer? I have to send them a replacement license file that will work with their new computer, but they will still be able to use it on their old computer as well. There is no way for me to 'de-authorize' their old computer without asking them to run some program on it or something. Also, one important note: with the software I would make it very clear to the user in the EULA that the software connects to a remote server to verify licensing and that no personal information is sent. I know I don't care much for software that does that kinda stuff without me knowing. Anyways, just looking for some opinions from people who have maybe gone down this kinda road. It seems like remote-server-dependent software would be one of the most effective copy-protection mechanisms, not just because of the difficulty of circumventing it, but also because it could be pretty easy to manage the licenses on the developer's end.

    Read the article

  • Visual Studio 2010 SP1 Beta supports IIS Express

    - by DigiMortal
    Visual Studio 2010 SP1 Beta and ASP.NET MVC 3 RC2 were both announced today. I ran a little test on one of my web applications to see how Visual Studio 2010 works with IIS Express. In this posting I will show you how to make your ASP.NET MVC 3 application work with IIS Express.

    Installing new stuff
    You can install IIS Express using the Web Platform Installer. It is not part of WebMatrix anymore and you can just install IIS Express without WebMatrix. NB! You have to install IIS Express using the Web Platform Installer because IIS Express is not installed by SP1. After installing Visual Studio 2010 SP1 Beta on my machine (it took a long-long-long time to install) I also installed ASP.NET MVC 3 RC2. If you have the Async CTP installed on your machine you have to uninstall it to get ASP.NET MVC 3 RC2 installed and running without problems. The screenshot on the right shows what kind of horrors my old laptop had to survive to get all the new stuff installed.

    Setting IIS Express as the server for a web application
    Now, when you right-click on a web project you should see a new menu item in the context menu – Use IIS Express…. If you click on it you are asked for confirmation, and if you say Yes then your web application is reconfigured to use IIS Express. After configuration you will see a dialog box like this. And you are done. You can run your application now.

    Running the web application
    When you run your application it runs on IIS Express. You can see the IIS Express icon in the taskbar, and when you click it you can open the IIS Express settings. If you closed your application in the browser you can open it again from the IIS Express icon.

    Modifying IIS Express settings for a web application
    You can modify the IIS Express settings for your application. Just open your project properties and move to the Web tab. IIS and IIS Express use the same settings; the difference is whether the Use IIS Express checkbox is checked or not.

    Switching back to Visual Studio Development Server
    If you don't want to, or can't, use IIS Express for some reason, you can easily switch back to the Visual Studio Development Server. Just right-click on your web application project and select Use Visual Studio Development Server from the context menu.

    Conclusion
    IIS Express is more independent than the full version of IIS and it can also be installed and run on machines where very strict rules apply (some corporate and academic environments, for example). IIS Express was previously part of the WebMatrix package, but now it is a separate product and Visual Studio 2010 has very nice support for it thanks to SP1. You can easily make your web applications use IIS Express, and if you want to switch back to the development server that is also very easy.

    Read the article

  • Inspire Geek Love with These Hilarious Geek Valentines

    - by Eric Z Goodnight
    Want to send some Geek Love to that special someone? Why not do it with these elementary school throwback valentines, and win their heart this upcoming Valentine’s day—the geek way! Read on to see the simple method to make your own custom Valentines, as well as download a set of eleven ready-made ones any geek guy or gal should be delighted to get. It’s amore!

    How to Make Custom Valentines
    A size we’ve used for all of our Valentines is 3” x 4” at 150 dpi. This is fairly low resolution for print, but makes a great graphic to email. With your new image open, navigate to Edit > Fill and fill your background layer with a rich, red color (or whatever appeals to you). By setting “Use” to “Foreground Color” as shown above, you’ll paint whatever foreground color you have in your color picker. Press to select the text tool. Set a few text objects, using whatever fonts appeal to you. Pixel fonts, like this one, are freely downloadable, and we’ve already shared a great list of Valentines fonts. Copy an image from the internet if you’re confident your sweetie won’t mind a bit of fair use of copyrighted imagery. If they do mind, find yourself some great Creative Commons images. to do a free transform on your image, sizing it to whatever dimensions work best for your design. Right-click your newly added image layer in your panel and choose “Blending Effects” to pick a Layer Style. “Stroke” with this setting adds a black line around your image. Also turning on “Outer Glow” with this setting puts a dark black shadow around the top and bottom (and sides, although they are hidden). Add some more text. Double entendre is recommended. Click and hold down on the “Rectangle Tool” to get the “Custom Shape Tool.” The custom shape tool has useful vector shapes built into it. Find the “Shape” dropdown in the menu to find the heart image. Click and drag to create a vector heart shape in your image. Your layers panel is where you can change the color, if it happens to use the wrong one at first. Click the color swatch in your panel, highlighted in blue above. will transform your vector heart. You can also use it to rotate, if you like. Add some details, like this Power or Standby symbol, which can be found in symbol fonts, taken from images online, or drawn by hand. Your Valentine is now ready to be saved as a JPG or PNG and sent to the object of your affection! Keep reading to see a list of 11 downloadable How-To Geek Valentines, including this one and the three from the header image.

    Download The HTG Set of Valentines
    Download the HTG Geek Valentines (ZIP)
    When he’s not wooing ladies with Valentines cards, you can email the author at [email protected] with your Photoshop and Graphics questions. Your questions may be featured in a future How-To Geek article!

    Read the article

  • Special Activities in the OTN Lounge

    - by Bob Rhubart
    What is the OTN Lounge? It's the place for Oracle OpenWorld and JavaOne attendees to hang out, get off your feet, rest up between sessions, recharge your laptop, tablet, or phone, connect with other community members, pick the brains of subject matter experts and community leaders, enjoy some refreshments (coffee and soft drinks in the morning, beer in the afternoon), and avoid the crowds by watching keynote presentations on a plasma screen. But in addition to general chillaxin' the OTN Lounge also hosts several special activities throughout the week…

    OTN Lounge Special Activities

    Sunday
    Oracle Social Network Developer Challenge Kick-off (7:00pm - 8:30pm)
    Want to learn more about Oracle Social Network? Love working with APIs? Enter the Oracle Social Network Developer Challenge and build your dream integration with Oracle's secure, purposeful social network for business. Demonstrate your skills, work with the latest and greatest and compete for $500 in Amazon gift cards.
    - Go to theappslab.com/osnregisterr
    - Read and agree to the terms and rules.
    - Register yourself with your name, corporate email address, and company.
    - Watch your inbox for a confirmation email from Oracle Social Network.
    - Start coding (individuals or teams welcome)
    - Show off your work to the judges in the OTN Lounge, Wednesday, 4:00pm - 6:00pm

    Monday (Lounge hours: 8:00am - 7:00pm)
    RAC Attack (9:00am - 1:00pm) Learn about Oracle Real Application Clustering (RAC) in this collaborative event. You'll work with experts from the IOUG RAC SIG to get an Oracle Database 11gR2 RAC cluster running inside a virtual machine. For more information: RAC attack at Oracle Open World (Pythian Blog), RAC Attack - Oracle Cluster Database at Home/Events (WikiBooks)
    Oracle Social Network Developer Challenge Office Hours (4:00pm - 8:00pm) Meet the people behind Oracle Social Network.

    Tuesday (Lounge hours: 8:00am - 7:00pm)
    RAC Attack (9:00am - 1:00pm)
    Oracle Social Network Developer Challenge Office Hours (4:30pm - 8:00pm)
    Oracle Database / Oracle Fusion Middleware Tweet Meet (4:30pm - 6:00pm) Free as in beer! Oracle Database and Oracle Fusion Middleware tweeters, gather in the OTN Lounge for refreshments and conversation with fellow tweeters and Oracle Database and Middleware experts.

    Wednesday (Lounge Hours: 8:00am - 6:00pm)
    RAC Attack (9:00am - 1:00pm)
    Oracle Social Network Developer Challenge Judging (4:00pm - 6:00pm)
    Oracle ADF / Oracle Fusion Middleware Meet-up (4:30pm - 5:30pm) Join other Oracle ADF and Oracle Fusion Middleware developers and meet the product managers and engineers behind Oracle ADF, ADF Mobile, and ADF Essentials. Did we mention free beer?

    Thursday (Lounge Hours: 8:00am - 2:00pm)
    RAC Attack (9:00am - 1:00pm)

    The OTN Lounge is located in the Howard St. tent, located by no small coincidence on Howard St. between 3rd and 4th, directly between Moscone North and Moscone South. An Oracle OpenWorld or JavaOne conference badge is required for access to the OTN Lounge.

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity. Why? Well, by its very nature, there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components - the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great...

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to properly administer.

    Over the past 9 months, monitoring and management have been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January, the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available.

    In the webinar, the team will cover:

    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.

    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!

    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).

    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
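    Because NDBINFO is exposed as an ordinary schema on any MySQL server attached to the cluster, you can poll it from whatever tooling you already use. The following Java/JDBC sketch simply dumps one of the ndbinfo tables; it assumes MySQL Connector/J is on the classpath, and the host name, credentials and the choice of the memoryusage table are placeholder assumptions - check the ndbinfo chapter of the MySQL Cluster manual for the tables available in your version.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.SQLException;
        import java.sql.Statement;

        // Minimal sketch: read cluster status through plain SQL by querying the
        // ndbinfo schema of a MySQL server connected to the cluster. The URL,
        // credentials and table name below are placeholders, not real defaults.
        public class NdbInfoPoller {

            public static void main(String[] args) throws SQLException {
                String url = "jdbc:mysql://sqlnode1:3306/ndbinfo";   // hypothetical SQL node
                try (Connection con = DriverManager.getConnection(url, "monitor", "secret");
                     Statement stmt = con.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT * FROM memoryusage")) {

                    // Print every column generically so the sketch does not depend
                    // on the exact column layout of your cluster version.
                    ResultSetMetaData meta = rs.getMetaData();
                    while (rs.next()) {
                        StringBuilder row = new StringBuilder();
                        for (int i = 1; i <= meta.getColumnCount(); i++) {
                            row.append(meta.getColumnLabel(i)).append('=')
                               .append(rs.getString(i)).append("  ");
                        }
                        System.out.println(row.toString().trim());
                    }
                }
            }
        }

    The same approach works for simple graphing or alerting: schedule the query, store the numbers, and raise a flag when usage crosses a threshold - essentially what the Enterprise Advisors mentioned above automate for you.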

    Read the article

  • JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    - by John-Brown.Evans
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    In this example we will create a BPEL process which will write (enqueue) a message to a JMS queue using a JMS adapter. The JMS adapter will enqueue the full XML payload to the queue. This sample will use the following WebLogic Server objects.
The first two, the Connection Factory and JMS Queue, were created as part of the first blog post in this series, JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g. If you haven't created those objects yet, please see that post for details on how to do so. The Connection Pool will be created as part of this example. Object Name Type JNDI Name TestConnectionFactory Connection Factory jms/TestConnectionFactory TestJMSQueue JMS Queue jms/TestJMSQueue eis/wls/TestQueue Connection Pool eis/wls/TestQueue 1. Verify Connection Factory and JMS Queue As mentioned above, this example uses a WLS Connection Factory called TestConnectionFactory and a JMS queue TestJMSQueue. As these are prerequisites for this example, let us verify they exist. Log in to the WebLogic Server Administration Console. Select Services > JMS Modules > TestJMSModule You should see the following objects: If not, or if the TestJMSModule is missing, please see the abovementioned article and create these objects before continuing. 2. Create a JMS Adapter Connection Pool in WebLogic Server The BPEL process we are about to create uses a JMS adapter to write to the JMS queue. The JMS adapter is deployed to the WebLogic server and needs to be configured to include a connection pool which references the connection factory associated with the JMS queue. In the WebLogic Server Console Go to Deployments > Next and select (click on) the JmsAdapter Select Configuration > Outbound Connection Pools and expand oracle.tip.adapter.jms.IJmsConnectionFactory. This will display the list of connections configured for this adapter. For example, eis/aqjms/Queue, eis/aqjms/Topic etc. These JNDI names are actually quite confusing. We are expecting to configure a connection pool here, but the names refer to queues and topics. One would expect these to be called *ConnectionPool or *_CF or similar, but to conform to this nomenclature, we will call our entry eis/wls/TestQueue . This JNDI name is also the name we will use later, when creating a BPEL process to access this JMS queue! Select New, check the oracle.tip.adapter.jms.IJmsConnectionFactory check box and Next. Enter JNDI Name: eis/wls/TestQueue for the connection instance, then press Finish. Expand oracle.tip.adapter.jms.IJmsConnectionFactory again and select (click on) eis/wls/TestQueue The ConnectionFactoryLocation must point to the JNDI name of the connection factory associated with the JMS queue you will be writing to. In our example, this is the connection factory called TestConnectionFactory, with the JNDI name jms/TestConnectionFactory.( As a reminder, this connection factory is contained in the JMS Module called TestJMSModule, under Services > Messaging > JMS Modules > TestJMSModule which we verified at the beginning of this document. )Enter jms/TestConnectionFactory  into the Property Value field for Connection Factory Location. After entering it, you must press Return/Enter then Save for the value to be accepted. If your WebLogic server is running in Development mode, you should see the message that the changes have been activated and the deployment plan successfully updated. If not, then you will manually need to activate the changes in the WebLogic server console. Although the changes have been activated, the JmsAdapter needs to be redeployed in order for the changes to become effective. 
This should be confirmed by the message Remember to update your deployment to reflect the new plan when you are finished with your changes as can be seen in the following screen shot: The next step is to redeploy the JmsAdapter.Navigate back to the Deployments screen, either by selecting it in the left-hand navigation tree or by selecting the “Summary of Deployments” link in the breadcrumbs list at the top of the screen. Then select the checkbox next to JmsAdapter and press the Update button On the Update Application Assistant page, select “Redeploy this application using the following deployment files” and press Finish. After a few seconds you should get the message that the selected deployments were updated. The JMS adapter configuration is complete and it can now be used to access the JMS queue. To summarize: we have created a JMS adapter connection pool connector with the JNDI name jms/TestConnectionFactory. This is the JNDI name to be accessed by a process such as a BPEL process, when using the JMS adapter to access the previously created JMS queue with the JNDI name jms/TestJMSQueue. In the following step, we will set up a BPEL process to use this JMS adapter to write to the JMS queue. 3. Create a BPEL Composite with a JMS Adapter Partner Link This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it. This sample will use the connection name jbevans-lx-PS5, as that is the name of the connection pointing to my SOA PS5 installation. When using a JMS adapter from within a BPEL process, there are various configuration options, such as the operation type (consume message, produce message etc.), delivery mode and message type. One of these options is the choice of the format of the JMS message payload. This can be structured around an existing XSD, in which case the full XML element and tags are passed, or it can be opaque, meaning that the payload is sent as-is to the JMS adapter. In the case of an XSD-based message, the payload can simply be copied to the input variable of the JMS adapter. In the case of an opaque message, the JMS adapter’s input variable is of type base64binary. So the payload needs to be converted to base64 binary first. I will go into this in more detail in a later blog entry. This sample will pass a simple message to the adapter, based on the following simple XSD file, which consists of a single string element: stringPayload.xsd <?xml version="1.0" encoding="windows-1252" ?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://www.example.org" targetNamespace="http://www.example.org" elementFormDefault="qualified" <xsd:element name="exampleElement" type="xsd:string"> </xsd:element> </xsd:schema> The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project JmsAdapterWriteWithXsd and select SOA as the project technology type. If you already have an application, continue below. 
Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterWriteSchema. When prompted for the composite type, choose Composite With BPEL Process. When prompted for the BPEL Process, name it JmsAdapterWriteSchema too and choose Synchronous BPEL Process as the template. This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings. Create an XSD File An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable and add it to the project. First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item. Select File > New > General > XML and choose XML Schema. Call it stringPayload.xsd and when the editor opens, select the Source view. then replace the contents with the contents of the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree. Create a JMS Adapter Partner Link We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it. From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterWrite Oracle Enterprise Messaging Service (OEMS): Oracle Weblogic JMS AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the above JMS queue and connection factory were created. You can use the “+” button to create a connection directly from the wizard, if you do not already have one. This example uses a connection called jbevans-lx-PS5. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Produce Message Operation Name: Produce_message Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created earlier. JNDI Name: The JNDI name to use for the JMS connection. This is probably the most important step in this exercise and the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) MessagesURL: We will use the XSD file we created earlier, stringPayload.xsd to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string. Press Next and Finish, which will complete the JMS Adapter configuration. Wire the BPEL Component to the JMS Adapter In this step, we link the BPEL process/component to the JMS adapter. 
From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow. This completes the steps at the composite level. 4. Complete the BPEL Process Design Invoke the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterWriteSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane. An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable. Press OK after creating the variable. ( For some reason, while I was testing this, the JMS Adapter moved back to the left-hand swim lane again after this step. There is no harm in leaving it there, but I find it easier to follow if it is in the right-hand lane, because I kind-of think of the message coming in on the left and being routed through the right. But you can follow your personal preference here.) Assign Variables Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter and, for completion, so the process has an output to print, again to the process’s output variable. Double-click the Assign activity and create two Copy rules: for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result This will create two copy rules, similar to the following: Press OK. This completes the BPEL and Composite design. 5. Compile and Deploy the Composite We won’t go into too much detail on how to compile and deploy. In JDeveloper, compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild... If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish. You should see the message ---- Deployment finished. ---- in the Deployment frame, if the deployment was successful. 6. Test the Composite This is the exciting part. Open two tabs in your browser and log in to the WebLogic Administration Console in one tab and the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation in the other. We will use the Console to monitor the messages being written to the queue and the EM to execute the composite. In the Console, go to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. Note the number of messages under Messages Current. 
In the EM, go to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterWriteSchema [1.0], then press the Test button. Under Input Arguments, enter any string into the text input field for the payload, for example Test Message then press Test Web Service. If the instance is successful you should see the same text in the Response message, “Test Message”. In the Console, refresh the Monitoring screen to confirm a new message has been written to the queue. Check the checkbox and press Show Messages. Click on the newest message and view its contents. They should include the full XML of the entered payload. 7. Troubleshooting If you get an exception similar to the following at runtime ... BINDING.JCA-12510 JCA Resource Adapter location error. Unable to locate the JCA Resource Adapter via .jca binding file element The JCA Binding Component is unable to startup the Resource Adapter specified in the element: location='eis/wls/QueueTest'. The reason for this is most likely that either 1) the Resource Adapters RAR file has not been deployed successfully to the WebLogic Application server or 2) the '' element in weblogic-ra.xml has not been set to eis/wls/QueueTest. In the last case you will have to add a new WebLogic JCA connection factory (deploy a RAR). Please correct this and then restart the Application Server at oracle.integration.platform.blocks.adapter.fw.AdapterBindingException. createJndiLookupException(AdapterBindingException.java:130) at oracle.integration.platform.blocks.adapter.fw.jca.cci. JCAConnectionManager$JCAConnectionPool.createJCAConnectionFactory (JCAConnectionManager.java:1387) at oracle.integration.platform.blocks.adapter.fw.jca.cci. JCAConnectionManager$JCAConnectionPool.newPoolObject (JCAConnectionManager.java:1285) ... then this is very likely due to an incorrect JNDI name entered for the JMS Connection in the JMS Adapter Wizard. Recheck those steps. The error message prints the name of the JNDI name used. In this example, it was incorrectly entered as eis/wls/QueueTest instead of eis/wls/TestQueue. This concludes this example. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery
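    If you would rather verify the enqueued message from the command line than through the console, a plain JMS client along the lines of the QueueReceive.java sample from Step 3 of this series also works. The sketch below is only an illustration: it assumes a WebLogic client jar (for example wlfullclient.jar) on the classpath and the default local URL t3://localhost:7001 - adjust both to your installation. The JNDI names are the ones used in this article.

        import java.util.Hashtable;
        import javax.jms.Message;
        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueConnectionFactory;
        import javax.jms.QueueReceiver;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        // Minimal sketch: dequeue one message from TestJMSQueue to confirm what the
        // JmsAdapterWriteSchema composite wrote. JNDI names match this article; the
        // provider URL is an assumption for a default local installation.
        public class VerifyTestJMSQueue {

            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<String, String>();
                env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
                env.put(Context.PROVIDER_URL, "t3://localhost:7001");   // adjust host and port

                Context ctx = new InitialContext(env);
                QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
                Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

                QueueConnection con = cf.createQueueConnection();
                con.start();
                QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueReceiver receiver = session.createReceiver(queue);

                // Wait up to five seconds; the payload should contain the
                // <exampleElement> XML produced by the BPEL process.
                Message msg = receiver.receive(5000);
                if (msg == null) {
                    System.out.println("No message on the queue.");
                } else if (msg instanceof TextMessage) {
                    System.out.println(((TextMessage) msg).getText());
                } else {
                    System.out.println("Received non-text message: " + msg);
                }

                receiver.close();
                session.close();
                con.close();
            }
        }

    Bear in mind that running this consumes the message, so the Messages Current counter in the console monitoring screen will drop back down afterwards.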

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it: SSO to every SaaS app we integrate with – Users can Single Sign On to any app we are integrated with at no charge. This includes all the top SAAS Apps and every app in our application gallery whether they use federation or password vaulting. Application access assignment and removal – IT Admins can assign access privileges to web applications to the users in their active directory assuring that every employee has access to the SAAS Apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges assuring data security and minimizing IP loss User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems. Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to un-usual usage patterns for instance when a user logs in from multiple locations at the same time. Our Application Access Panel – Users are logging in from every type of devices including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access access and launch their apps from any device and anywhere. You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today. Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically: 1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users. 2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The later approach is ideal for companies that wish to use their corporate user identities to sign-in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign-in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward.  4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allow you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure. Below are some details on how all of this works: Subscriptions within a Directory As part of today’s update, we have associated all existing Window Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you login to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign-into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like you used to before. Manage your Directory You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also setup the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to login or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to login or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign-in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign-in you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory! Manage Subscriptions across Multiple Directories If you have already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want. Filtering By Directory and Subscription Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:   Windows Azure SDK 2.2 Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including: Visual Studio 2013 Support Integrated Windows Azure Sign-In support within Visual Studio Remote Debugging Cloud Services with Visual Studio Firewall Management support within Visual Studio for SQL Databases Visual Studio 2013 RTM VM Images for MSDN Subscribers Windows Azure Management Libraries for .NET Updated Windows Azure PowerShell Cmdlets and ScriptCenter I’ll post a follow-up blog shortly with more details about all of the above. 
Additional Updates In addition to the above enhancements, today’s release also includes a number of additional improvements: AutoScale: Richer time and date based scheduling support (set different rules on different dates) AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios) AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules Operation Logs: Auditing support for Service Bus management operations Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-) Summary Today’s Windows Azure release enables a bunch of great new scenarios, and enables a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Developer Training – Importance and Significance – Part 1

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Can anyone remember their final day of schooling?  This is probably a silly question because – of course you can!  Many people mark this as the most exciting, happiest day of their life.  It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Beginning in Real World However, many former-students will be disappointed to find out that once they become employees, learning is not over.  Many companies are discovering the importance and benefits to training their employees.  You can breathe a sigh of relief, though, because much for this kind of training there are not usually tests! We often think that we go to school for our younger years so that we do all our learning all at once, and then for the rest of our lives we use that knowledge.  But in so many cases, but especially for developers, the opposite is true.  It takes many years of schools to learn the basics of a field, and then our careers are spent learning to become experts. For this, and so many other reasons, training is very important.  Example one: developer training leads to better employees.  A company is only as good as the people it employs, and one way to ensure that you have employed the right candidate is through training.  Training can take a regular “stone” and polish it into a “diamond.”  Employees who have been well-trained will be better at their jobs and produce a better product. Most Expensive Resource Did you know that one of the most expensive operating costs for any company is not buying goods, or advertising, but its employees – especially having to hire new employees.  Bringing in new people, getting them up to speed, and providing them with perks to attract them to a company is a huge cost for companies.  So employee retention – keep the employees you already have, and keeping them happy – is incredibly important from a business aspect.  And research shows that a well-trained employee is a happy employee.  They feel more confident in their job, happier with their position, and more cared-about – and therefore less likely to leave in search of a better job.  Employee training leads to better retention. Good Moral On the subject of keeping employees happy in order to keep them at a company, the complement to that research shows that happier employees are more efficient and overall better at their jobs.  You don’t have to be a scientist to figure out why this is true.  An employee who feel that his company cares about him and his educational future will work harder for the company.  He or she will put in that extra hour during the busy season that makes all the difference in the end.  Good morale is good for the company. If good morale is better for the company, you know that it goes hand-in-hand with something even better – better efficiency.  An employee who is well trained obviously knows more about their job and all the technical aspects.  That means when a problem crops up – and they inevitably do – this employee will be well-equipped to deal with that problem with fewer problems, and no need to go searching for help from higher up.  When employees are well trained, companies run more smoothly. 
    A Better Product

    Of course, all of these “pros” for employee training are leading up to the one thing that companies truly care about – a better product. We have shown that employees who have been trained to be competitive in the market are happier at the company, more efficient, and have better morale. The overall result is that the company’s product – whether it is a database, a piece of equipment, or even a physical good – is better. And a better product will always be more competitive on the market.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Unveils Oracle Social Relationship Management Suite at Oracle OpenWorld

    - by Richard Lefebvre
    New Service Enables Companies to Listen, Engage, Create, Market and Analyze Interactions across Multiple Social Platforms in Real-Time

    During his keynote presentation, Oracle CEO Larry Ellison announced the Oracle Social Relationship Management (SRM) Suite. Oracle Social Relationship Management Suite is an integrated enterprise service that enables companies to listen, engage, create, market, and analyze interactions across multiple social platforms in real time, providing a holistic view of the consumer.

    Oracle Social Relationship Management Suite is integrated with Oracle’s enterprise applications, including Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce, and Oracle Enterprise Resource Planning (ERP), allowing organizations to use social to transform their corporate business processes and systems.

    Additionally, Oracle Social Relationship Management Suite is integrated with Oracle Platform Services, including Oracle Java Cloud Service and Oracle Database Cloud Service, enabling marketing teams to integrate social with their custom Web pages, landing pages and marketing tools.

    Unleashing the Power of Social

    Providing a holistic view of consumer interactions, Oracle Social Relationship Management Suite includes:

    Oracle Social Network (OSN): Provides a secure collaboration platform that supports real-time collaboration and networking for users inside and outside the organization.

    Oracle Social Marketing: Enables marketers to centrally create, publish, moderate, manage, measure and report across multiple social campaigns and platforms. It also helps marketers publish social content, engage fans and customize their brand’s look and feel.

    Oracle Social Engagement & Monitoring Cloud Service: Enables organizations to analyze social media interactions while also empowering customer service and sales teams to effectively engage with customers and prospects. It gives organizations the tools they need to understand customers and take the appropriate actions by monitoring, listening, learning, and responding to signals and trends across the social web.

    Oracle Social Sites: Provides brands and agencies a powerful and rich editing experience that end users can leverage to dynamically develop and launch social sites.

    Oracle Data and Insights: A service that caters to a growing enterprise need for external information by providing information, directory and insights about common business entities.

    Supporting Quote

    “By fundamentally changing the way organizations connect with their different stakeholders, social is changing the rules of business,” said Thomas Kurian, executive vice president, Oracle Product Development. “With the Oracle Social Relationship Management Suite we are empowering our customers to embrace this change by integrating the tools required to listen, engage, create, market and analyze social interactions into existing applications and services.”

    Read the article
