Search Results

Search found 28222 results on 1129 pages for 'machine config'.


  • Wait for linux machine to be rebooted

    - by Theo
    I have a small script to install an update on my remote machine. I would like to reboot the machine remotely and, once it is back up, continue with some more commands. What I currently do is: ssh root@myMachine << COMMANDS_ISSUED ###... Tasks init 6 COMMANDS_ISSUED sleep 180s ssh root@myMachine << POST_REBOOT_COMMANDS ###.... More stuff POST_REBOOT_COMMANDS Is there a more elegant way to do it? Like pinging the machine every 5 seconds up to a maximum of 4 minutes? I work with a few Linux machines which have different boot-up times, and if my script continued immediately after the reboot, this could save quite some time for me. (Note: I don't want to parallelize execution over all machines, as I want to see for each machine whether everything worked fine.)
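    A minimal polling sketch along these lines (assumptions, not from the question: key-based root SSH access, and MACHINE as a placeholder host name; it polls every 5 seconds for up to 4 minutes):

        # hedged sketch: trigger the reboot, then poll until SSH answers again
        MACHINE=myMachine                        # placeholder host name
        ssh "root@$MACHINE" 'init 6'
        sleep 30                                 # give the host time to actually go down
        for i in $(seq 1 48); do                 # 48 polls * 5 s = 4 minutes maximum
            if ssh -o ConnectTimeout=5 -o BatchMode=yes "root@$MACHINE" true 2>/dev/null; then
                echo "$MACHINE is back up"
                ssh "root@$MACHINE" 'echo run the post-reboot commands here'
                break
            fi
            sleep 5
        done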

    Read the article

  • WCF service not working after program update

    - by Boesj
    I have recently added a WCF service reference to my program. When I perform a clean install of this program, everything seems to work as expected. But, when I install the program on a client which already has a previous version (without the new service reference) installed, I get an exception telling me the default endpoint for this particular service could not be found. It seems that the appname.exe.config is not being updated with the new endpoint settings. Is there a reason for this, and how can I force the installer to overwrite the config file? I'm using the default Visual Studio 2008 installer project with RemovePreviousVersions set to True. Update: My program encrypts the settings section after the first run with the following code: Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); ConfigurationSection section = config.GetSection(sectionKey); if (section != null) { if (!section.SectionInformation.IsProtected) { if (!section.ElementInformation.IsLocked) { section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider"); section.SectionInformation.ForceSave = true; config.Save(ConfigurationSaveMode.Full); } } } When I do not run the program before installing the new version, the app.config gets updated.

    Read the article

  • Deploy ASP.NET Web Applications with Web Deployment Projects

    - by Ben Griswold
    One may quickly build and deploy an ASP.NET web application via the Publish option in Visual Studio.  This option works great for most simple deployment scenarios but it won’t always cut it.  Let’s say you need to automate your deployments. Or you have environment-specific configuration settings. Or you need to execute pre/post build operations when you do your builds.  If so, you should consider using Web Deployment Projects. The Web Deployment Project type doesn’t come out-of-the-box with Visual Studio 2008.  You’ll need to Download Visual Studio® 2008 Web Deployment Projects – RTW and install it if you want to follow along with this tutorial. I’ve created a shiny new ASP.NET MVC project.  Web Deployment Projects work with websites, web applications and MVC projects so feel free to go with any web project type you’d like.  Once your web application is in place, it’s time to add the Web Deployment project.  You can hunt and peck around the File > New > New Project… dialogue as long as you’d like, but you aren’t going to find what you need.  Instead, select the web project and then choose the “Add Web Deployment Project…” option hiding behind the Build menu. I prefer to name my projects based on the environment in which I plan to deploy.  In this case, I’ll be rolling to the QA machine. Don’t expect too much to happen at this point.  A seemingly empty project with a funny icon will be added to your solution.  That’s it. I want to take a minute and talk about configuration settings before we continue.  Some of the common settings which might change from environment to environment are appSettings, connectionStrings and mailSettings.  Here’s a look at my updated web.config: <appSettings>   <add key="MvcApplication293.Url" value="http://localhost:50596/" />     </appSettings> <connectionStrings>   <add name="ApplicationServices"        connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"        providerName="System.Data.SqlClient"/> </connectionStrings>   <system.net>   <mailSettings>     <smtp from="[email protected]">         <network host="server.com" userName="username" password="password" port="587" defaultCredentials="false"/>     </smtp>   </mailSettings> </system.net> I want to update these values prior to deploying to the QA environment.  There are variations to this approach, but I like to maintain environment-specific settings for each of the web.config sections in the Config/[Environment] project folders.  I’ve provided a screenshot of the QA environment settings below. It may be obvious what one should include in each of the three files.  Basically, it is a copy of the associated web.config section with updated setting values.  For example, the AppSettings.config file may include a reference to the QA web url, the Db.config would include the QA database server and login information, and the SmtpSettings.config would include a QA SMTP server and user information.
<?xml version="1.0" encoding="utf-8" ?> <appSettings>   <add key="MvcApplication293.Url" value="http://qa.MvcApplicatinon293.com/" /> </appSettings> AppSettings.config  <?xml version="1.0" encoding="utf-8" ?> <connectionStrings>   <add name="ApplicationServices"        connectionString="server=QAServer;integrated security=SSPI;database=MvcApplication293"        providerName="System.Data.SqlClient"/>   </connectionStrings> Db.config  <?xml version="1.0" encoding="utf-8" ?> <smtp from="[email protected]">     <network host="qaserver.com" userName="qausername" password="qapassword" port="587" defaultCredentials="false"/> </smtp> SmtpSettings.config  I think our web project is ready to deploy.  Now, it’s time to concentrate on the Web Deployment Project itself.  Right-click on the project file and open the Property Pages. The first thing to call out is the Configuration dropdown.  I only deploy a project which is built in Release Mode so I only set up the Web Deployment Project for this mode.  (This is when you change the Configuration selection to “Release.”)  I typically keep the Output Folder default value – .\Release\.  When the application is built, all artifacts will be dropped in the .\Release\ folder relative to the Web Deployment Project root.  The final option may be up for some debate.  I like to roll out updatable websites so I select the “Allow this precompiled site to be updatable” option.  I really do like to follow standard SDLC processes when I release my software but there are those times when you just have to make a hotfix to production and I like to keep this option open if need be.  If you are strongly opposed to this idea, please, by all means, don’t check the box. The next tab is boring.  I don’t like to deploy a crazy number of DLLs so I merge all outputs to a single assembly.  Again, you may have another option and feel free to change this selection if you so wish. If you follow my lead, take care when choosing a single assembly name.  The Assembly Name cannot be the same as the website or any other project in your solution otherwise you’ll receive a circular reference build error.  In other words, I can’t name the assembly MvcApplication293 or my output window would start yelling at me. Remember when we called out our QA configuration files?  Click on the Deployment tab and you’ll see how we’re going to use them.  Notice the Web.config file section replacements value.  All this does is swap the called-out web.config sections with the content of the Config\QA\* files.  You can reduce or extend this list as you deem fit.  Did you see the “Use external configuration source file” option?  You know how you can point any of your web.config sections to an external file via the configSource attribute?  This option allows you to leverage that technique: instead of replacing the content of the sections, you replace the configSource attribute value. <appSettings configSource="Config\QA\AppSettings.config" /> Go ahead and Apply your changes.  I’d like to take a look at the project file we just updated.  Right-click on the Web Deployment Project and select “Open Project File.” One of the first configuration blocks reflects core Release build settings.  There are a couple of points I’d like to call out here: DebugSymbols=false ensures the compilation debug attribute in your web.config is flipped to false as part of the build process.  There’s some crummy (more likely old) documentation which implies you need a ToggleDebugCompilation task to make this happen.  Nope.
Just make sure DebugSymbols is set to false.  EnableUpdateable implies a single dll for the web application rather than a dll for each object and an empty view file. I think updatable applications are cleaner and include the benefit (or risk based on your perspective) that portions of the application can be updated directly on the server.  I called this out earlier but I wanted to reiterate. <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">     <DebugSymbols>false</DebugSymbols>     <OutputPath>.\Release</OutputPath>     <EnableUpdateable>true</EnableUpdateable>     <UseMerge>true</UseMerge>     <SingleAssemblyName>MvcApplication293</SingleAssemblyName>     <DeleteAppCodeCompiledFiles>true</DeleteAppCodeCompiledFiles>     <UseWebConfigReplacement>true</UseWebConfigReplacement>     <ValidateWebConfigReplacement>true</ValidateWebConfigReplacement>     <DeleteAppDataFolder>true</DeleteAppDataFolder>   </PropertyGroup> The next section is self-explanatory.  The content merely reflects the replacement values you provided via the Property Pages. <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">     <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">       <Section>appSettings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\Db.config">       <Section>connectionStrings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">       <Section>system.net/mailSettings/smtp</Section>     </WebConfigReplacementFiles>   </ItemGroup> You’ll want to extend the ItemGroup section to include the files you wish to exclude from the build.  The sample ExcludeFromBuild nodes exclude all obj, svn, csproj, user, pdb artifacts from the build. Even though these files aren’t included in your web project, you’ll need to exclude them or they’ll show up along with the required deployment artifacts.  <ItemGroup Condition="'$(Configuration)|$(Platform)' == 'Release|AnyCPU'">     <WebConfigReplacementFiles Include="Config\QA\AppSettings.config">       <Section>appSettings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\Db.config">       <Section>connectionStrings</Section>     </WebConfigReplacementFiles>     <WebConfigReplacementFiles Include="Config\QA\SmtpSettings.config">       <Section>system.net/mailSettings/smtp</Section>     </WebConfigReplacementFiles>     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\obj\**\*.*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*.*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\.svn\**\*" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.csproj" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\**\*.user" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\bin\*.pdb" />     <ExcludeFromBuild Include="$(SourceWebPhysicalPath)\Notes.txt" />   </ItemGroup> Pre/post build and pre/post merge tasks are added to the final code block.  By default, your project file should look like the following – a completely commented out section. <!-- To modify your build process, add your task inside one of        the targets below and uncomment it. Other similar extension        points exist, see Microsoft.WebDeployment.targets.   
<Target Name="BeforeBuild">   </Target>   <Target Name="BeforeMerge">   </Target>   <Target Name="AfterMerge">   </Target>   <Target Name="AfterBuild">   </Target>   --> Update the section to remove all temporary Config folders and files after the build.  <!-- To modify your build process, add your task inside one of        the targets below and uncomment it. Other similar extension        points exist, see Microsoft.WebDeployment.targets.     <Target Name="BeforeMerge">   </Target>   <Target Name="AfterMerge">   </Target>     <Target Name="BeforeBuild">      </Target>       -->   <Target Name="AfterBuild">     <!-- WebConfigReplacement requires the Config files. Remove after build. -->     <RemoveDir Directories="$(OutputPath)\Config" />   </Target> That’s it for setup.  Save the project file, flip the solution to Release Mode and build.  If there’s an issue, consult the Output window for details.  If all went well, you will find your deployment artifacts in your Web Deployment Project folder like so. Both the code source and published application will be there. Inside the Release folder you will find your “published files” and you’ll notice the Config folder is nowhere to be found.  In the Source folder, all project files are found with the exception of the items which were excluded from the build. I’ll wrap up this tutorial by calling out a little Web Deployment pet peeve of mine: there doesn’t appear to be a way to add an existing web deployment project to a solution.  The best I can come up with is to create a new web deployment project and then copy and paste the contents of the existing project file into the new project file.  It’s not a big deal but it bugs me. Download the Solution
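    Since automating deployments was one of the motivations called out at the top of this post, here is a hedged command-line sketch for building the Web Deployment Project outside of Visual Studio (run from a Visual Studio 2008 command prompt; the .wdproj file name below is a placeholder for whatever you named your deployment project):

        rem sketch: build the QA Web Deployment Project in Release mode from the command line
        msbuild MvcApplication293_QA.wdproj /p:Configuration=Release

    This is the same MSBuild run the IDE kicks off when you build the solution in Release mode, so it drops the published output into the same .\Release\ folder and is easy to wire into a scheduled or continuous build.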

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, with a completely new authentication scheme that's described in a documentation topic that isn't directly linked anywhere, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtless leaving many people to give up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data Link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Max translations are 1000 characters so at minimum this means you get around 2000 translations a month for free. However most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret is auto-created and becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
    To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10-minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it to the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database-driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method Source on Github, Translating with Google Translate without Google API Keys, Creating a data-driven ASP.NET Resource Provider. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization  ASP.NET  .NET

    Read the article

  • Oracle Database 11gR2 backup also works with NetBackup in an Exadata V2 environment

    - by Fekete Zoltán
    Oracle 11gR2 databases can also be backed up with the Veritas NetBackup software, including on Oracle Enterprise Linux (using RMAN), in a 64-bit environment. In the documentation we need to look for the information that applies to Red Hat, since according to http://seer.entsupport.symantec.com/docs/337048.htm "Oracle Enterprise Linux (OEL)" Supported based on NetBackup Red Hat Enterprise Linux 4.x/5.x Client, Server, and Oracle Agent support. BMR is not supported. NetBackup compatibility lists: http://seer.entsupport.symantec.com/docs/303344.htm - NetBackup 7 is compatible with Oracle Exadata V2: http://seer.entsupport.symantec.com/docs/340295.htm - For NetBackup 6.x versions the following patch has to be installed: NB_6.5.5_ET1940073_1_347227.zip is a NetBackup 6.5.5 EEB (Emergency Engineering Binary) for Oracle Clients. http://seer.entsupport.symantec.com/docs/347227.htm and http://support.veritas.com/docs/279048.

    Read the article

  • Exadata sales are growing explosively

    - by Fekete Zoltán
    Read the HWSW article as well: "Úgy veszik az Exadatát, mintha kötelező lenne" (They are buying Exadata as if it were mandatory). This is what Mark Hurd, President of Oracle, said in the statement released at the close of the second quarter of Oracle's fiscal year 2011. "Since joining Oracle I've met with and visited many customers that have expressed a high level of enthusiasm around our strategy of engineering hardware and software that works together," said Oracle President, Mark Hurd. "That enthusiasm translates into an Exadata pipeline that has now grown to nearly $2 billion. That number is a good leading indicator that customers are planning to increase their investment in Oracle technology." In other words, many Oracle customers expressed to Mark Hurd their high regard for Oracle's strategy, which has put the development of hardware and software that are designed and work together on its banner. This customer-side recognition is what has grown the Exadata pipeline to nearly $2 billion. That number indicates that Oracle customers plan to increase their investment in Oracle technology. Larry Ellison, Oracle's chief, stated the following: "Our new generation of Exadata, Exalogic and SPARC Supercluster computers deliver much better performance and much lower cost than the fastest machines from IBM and HP." In other words, the new-generation Exadata, Exalogic and SPARC Supercluster servers deliver much better performance at significantly lower cost than the fastest machines from IBM or HP. Read the HWSW article as well: "Úgy veszik az Exadatát, mintha kötelező lenne".

    Read the article

  • EPM 11.1.2 - Configure a data source to support Essbase failover in active-passive clustering mode

    - by Ahmed A
    To configure a data source to support Essbase fail-over in active-passive clustering mode, replace the Essbase Server name value with the APS URL followed by the Essbase cluster name; for example, if the APS URL is http://<hostname>:13090/aps and the Essbase cluster name is EssbaseCluster-1, then the value in the Essbase Server name field would be: http://<hostname>:13090/aps/Essbase?clusterName=EssbaseCluster-1 Note: Entering the Essbase cluster name without the APS URL in the Essbase Server name field is not supported in this release.

    Read the article

  • The HOUG 2010 Conference presentations are now available for download

    - by Fekete Zoltán
    The HOUG Conference 2010 took place between March 22 and 24, 2010. The presentation materials can already be downloaded from http://www.houg.hu/ by clicking on Archívum (Archive) and then on HOUG 2010. The photos taken at the conference have not been uploaded yet, but let's hope they will appear shortly. :) The presentations of the Business Intelligence & Data Warehouse track are available here. Happy browsing!

    Read the article

  • EXALYTICS - Unable to run Summary Advisor when BI Admin Client Tool is installed separately

    - by Ahmed Awan
    Unable to launch Summary Advisor when the BI Admin Developer Client tool (version 11.1.1.6.0) is installed separately. In the Windows Event application log, the error points to a missing AggrAdvisor.xml file. The AggrAdvisor.xml file is missing from the BI client install location. Workaround: Download the AggrAdvisor.xml file and copy it to the following location to resolve this issue: <your drive>:\Program Files\Oracle Business Intelligence Enterprise Edition Plus Client\oraclebi\orahome\bifoundation\server\locale\l_en\

    Read the article

  • Exadata for the retail sector

    - by Fekete Zoltán
    In one of my favorite blogs, on the Rittman Mead website, information and a download link have appeared for a useful presentation. (See the "blog" keyword in the Top Tags box on the right, and the bottom entry.) The title of the presentation is: Exadata in the Retail Sector, that is, the use of Exadata in retail. The presentation was given by Jon Mead on March 23, 2010 in London at the Exadata V2, Oracle Extreme Performance Data Warehousing Seminar event. As you can see, they talked about nearly every fruit in the flourishing orchard of Oracle data warehousing and business intelligence: Oracle BI, the data warehousing features of 11gR2, and other topics. The presentations covered the following areas: - technical overview of Exadata - customer stories: LGR, Allegro, and one of the UK's largest online consumer electronics retailers - Oracle BI - GoldenGate (data replication) - advanced compression (compression of transactional data) - partitioning - OLAP - data mining, Oracle Data Mining

    Read the article

  • Reasons to Use a VM For Development

    - by George Stocker
    Background: I work at a start-up company, where one team uses Virtual Machines to connect to a remote server to do their development, and another team (the team I'm on) uses local IIS/SQL Server 2005/Visual Studio installations to conduct work. Team VM is located about 1000 miles from Team Non-VM, and the servers the VMs run off of are located near Team VM (Latency, for those that are wondering, is about 50ms). A person high up in the company is pushing for Team Non-VM to use virtual machines for programming, development, and testing. The latter point we agree on -- we want Virtual Machines to test configurations and various aspects of the web application in a 'clean' state. The Problem: What we don't agree on is having developers use RDP to connect remotely to a desktop that contains Visual Studio, SQL Server, and IIS to do the same development we could do locally on our laptops. I've tried the VM set-up, and besides the color issue, there is a latency issue that is rather noticeable, not to mention that since we're a start-up, a good number of employees work from home on occasion with our work laptops, and this move would cut off the laptops. They'd be turned in. Reasons to Use Remote VMs for Development (Not Testing!): Here are the stated reasons that this person wants us to use VMs: They work for Team VM. They keep the source code "safe". If we want to work from home, we could just use our home PCs. Licenses (I don't know what the argument is, only that it's been used). Reasons not to use Remote VMs for Development: Here are the stated reasons why we don't want to use VMs: We like working from home. We get a lot done on our own time. We're not going to use our home PCs to do work-related stuff. The latency is noticeable. Support for the VMs (if they go down, or if we need a new VM) takes a while. We don't have administrative privileges on the VM, and are unable to change settings as needed. What I'm looking for from the community is this: What reasons would you give for not using VMs for development? Keep in mind these are remote VMs -- this isn't a VM running on a local desktop. It's using the laptop (or a desktop) as a thin client for a remote VM. Also, on the other side of the coin: Is there something we're missing that makes VMs more palatable for development? Edit: I think 'safe' is used in terms of corporate espionage, or more correctly, if the laptop gets stolen, the person who stole it would have access to our source code. The former (as we've pointed out) is always going to be a possibility -- companies stop that with litigation, there isn't a technical solution (so far as I can see). The latter point is (though I don't know its usefulness in a corporate scenario) mitigated by Truecrypt'ing the entire volume.

    Read the article

  • What happened to VM based deployments?

    - by user128670
    Watched some MountainWest RubyConf 2014 talks and noticed an interesting theme. Many dynamic programming environments back in the old days used to be self-contained VM images, e.g. SmallTalk, GemStone/S. One could checkpoint, modify, and ship these images wholesale and have it up and running with very little effort. Fast forward to now and I'm still using Make files to configure and install binaries. What happened?

    Read the article

  • Thoughts on Development using Virtual Machines

    - by J_A_X
    I'll be working as a development lead for a startup and I've suggested that we use VMs for development. I'm not talking about each developer having a desktop with VMs for testing/development; I mean having a server rack where all VMs are managed and having the developers work from a microPC (ChromeOS anyone?) locally, or even remotely from their home computer. To me, the benefits are the fact that it's extremely scalable, cheaper in the long run, easier to manage, and that we utilize the hardware to its maximum potential. As for cons, I can't think of any particular showstoppers other than that we'll need someone to set up/maintain said setup. I was hoping that some of you might have had a similar setup at your place of employment and be able to weigh in with your opinions. Thanks.

    Read the article

  • Java IDE written in pure Java?

    - by Darestium
    Is there a Java IDE written in Java? I just got my year 9 DET laptop today at school, and there are all sorts of restrictions set in place. Somewhat annoyingly, you cannot run any executable other than the ones already installed on the system (for some reason they haven't disabled the use of Command Prompt, PowerShell, or strangely enough, regedit). They allow you to run Java executables, so I thought that would be the only way to be able to program on my crappy laptop at school (when I have finished all my work, naturally) :D Edit: By written in Java, I also mean that the executable that is used to run the program has the file extension ".jar", thus running on the JVM. Edit 2: I tried the DrJava IDE, and it worked great, thanks (I can compile and execute programs)! Regarding running Eclipse through the command line using the command "java -jar "C:/Users.../org.eclipse..."": this results in an error producing a log file; the main error is: MESSAGE An error occurred while automatically activating bundle org.eclipse.ui.workbench (182). How do I fix this error (I much prefer working with Eclipse over any other IDE)? Edit 3: Regarding my last edit, just disregard it :D. I fixed the problem by downloading the latest version of Eclipse.

    Read the article

  • Launch a real install of Ubuntu already on another hard-drive in Windows 7 like a VM

    - by Chad M
    I'm not too familiar with VMs and the like, so this may not even be possible. Here is what I have: A real, full install of Windows 7 on hard drive A. A real, full install of Ubuntu 10.04 on hard drive B. Grub allowing me to select what I want to launch when I start up my computer. It would be amazing if I could do one of two things. Within Windows 7, launch my real install of Ubuntu as if it were a VM. That means I would get all the installed software, all of the files, and all of the settings. Or, launch a VM copy of Ubuntu 10.04 but somehow make it use all of the installed software and settings from my real copy. Thanks!
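    A hedged sketch of one approach that is often suggested for this (an assumption on my part, not something stated in the question): VirtualBox can wrap a physical disk in a raw-access VMDK so that an existing install can be booted as a VM. The drive number below is a placeholder, the command needs an elevated (administrator) prompt, and booting a native install this way can upset Grub and hardware-specific drivers, so treat it as experimental:

        rem sketch: expose the physical disk holding Ubuntu (placeholder: PhysicalDrive1) to VirtualBox
        VBoxManage internalcommands createrawvmdk -filename C:\VMs\ubuntu-raw.vmdk -rawdisk \\.\PhysicalDrive1
        rem then create a new VM in the VirtualBox GUI and attach ubuntu-raw.vmdk as its hard disk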

    Read the article

  • Issue in understanding how to compare performance of classifier using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples for each class to be classified (single feature and data length = 100). The data are different time series (x coordinate vs time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A)=0.6; similarly P(B)=0.3 and P(C)=0.1. Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, with which parameters, and by which method should I do this if I use ROC or cross-validation? Since the features for my classifier are the probabilistic values for the ROC plot, what should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.

    Read the article

  • Installing Solaris Studio 12.2 on Ubuntu 10.04

    - by KronoS
    I'm having a dickens of a time installing Solaris Studio 12.2 on Ubuntu 10.04. I found this guide; however, using the alien option isn't finding the correct files. I'm not exactly sure about the syntax of alien; it's kinda alien to me (sorry for the bad pun). Also, when I download the tar file and extract it, there are errors every time saying things like: "operation not permitted" cannot creat symlink to '../prod/bin/cc': Operation not permitted I've extracted with superuser access, but to no avail. Any success from anyone else?
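    For reference, a hedged sketch of the usual alien invocation (the .rpm names below are placeholders, not the actual package names shipped with Solaris Studio 12.2):

        # sketch: convert the RPM packages to .deb and install them
        sudo alien --to-deb --scripts sunstudio*.rpm
        sudo dpkg -i sunstudio*.deb

    As for the symlink errors: "Operation not permitted" when creating symlinks usually points at extracting onto a filesystem that cannot hold them (for example a FAT/NTFS mount), so extracting under a native ext3/ext4 path is worth checking - an assumption, not something confirmed in the question.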

    Read the article

  • how to process document state transition?

    - by brick
    Imagine there is an application (ASP.NET MVC) that processes some documents. The document must be revised several times by different groups of users. State/role rules: a simple user can only publish the document (priority: low); userGroup1 can switch it to the next state or reject it (priority: higher); userGroup2 can confirm the previous state and switch it to the next state in sequence, or reject it (priority: highest). How do I implement such a workflow in ASP.NET MVC? How do I implement the UI and views so that a group with a lower priority can, both visually and technically, perform only the allowed transitions? Can I somehow extend that system: link? Do I need extras like a service bus or event sourcing for that?

    Read the article

  • Modern.IE VM license

    - by Thomas W.
    Microsoft provides some VMs for testing purposes (advertised on StackOverflow) and I'm trying to understand the license terms. The one I don't really understand is 1.b. You may use the software for testing purposes only. You may not use the software for commercial purposes. My thoughts: a) Testing a website in several browsers on several different virtual machines seems a quite professional approach. I hardly believe many private developers would do that. Of course they should, but which private developer has the time to do so? b) If that's really only available to private developers, what is the offer to companies doing the same thing? I am missing the advertisement for a paid service. My question as a non-native English speaker: Is testing by a company considered as a commercial purpose? Can I use the VMs within a company for testing or not?

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry: Sequence ID Username Start Time End Time Request Type A request type can be one of the following categories: Calculations, including the default calculation as well as both server and client side calculations Data loads, including data imports as well as data loaded using a load rule Data clears as well as outline resets Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process. You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries. Enabling Transaction Logging Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax: TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example: TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance. Configuring Transaction Log Replay Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases. Replaying the Transaction Log and Transaction Log Security Considerations To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. 
Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user’s security settings cannot be used. Removing Transaction Logs and Archived Replay Data Load and Rules Files Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file. Partitioned Database Considerations For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases. References For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • How to install Ubuntu over HTTP in virt-manager?

    - by Bond
    Hi, I am in a situation where I cannot use a CD, PXE boot or Wubi to install. I need to do an HTTP install of Ubuntu. I am basically trying to create a guest OS in a virtualization setup on Xen on non-VT hardware. On non-VT hardware, virt-manager does not allow installing from a local ISO or PXE; the only option is via a URL over http://. Here is what I did: 1) Downloaded the Ubuntu 10.04 32-bit ISO. 2) Kept it in /var/www (apache2 is running). 3) Renamed it to ubuntu.iso, and when I reached the stage where installation begins I gave the path http://localhost/ubuntu.iso but I got an error: any installable distribution not found. 4) After this I did mkdir /var/www/sk mount -t iso9660 /var/www/ubuntu.iso /var/www/sk -o loop and this time during the installation I gave the path http://localhost/sk I was able to see the contents in the browser at http://localhost/sk, as you would see on a normal CD. But on beginning the installation I got the same error: ValueError: Could not find an installable distribution at 'http://localhost/sk So I just want to confirm whether an HTTP install is done only this way or some other way, because the installation is not proceeding.

    Read the article

  • Why does F. Wagner consider "NOT (AI_LARGER_THAN_8.1)" to be ambiguous?

    - by oosterwal
    In his article on Virtual Environments (a part of his VFSM specification method) Ferdinand Wagner describes some new ways of thinking about Boolean Algebra as a software design tool. On page 4 of this PDF article, when describing operators in his system he says this: Control statements need Boolean values. Hence, the names must be used to produce Boolean results. To achieve this we want to combine them together using Boolean operators. There is nothing wrong with usage of AND and OR operators with their Boolean meaning. For instance, we may write: DI_ON OR AI_LARGER_THAN_8.1 AND TIMER_OVER to express the control situation: digital input is on or analog input is larger than 8.1 and timer is over. We cannot use the NOT operator, because the result of the Boolean negation makes sense only for true Boolean values. The result of, for instance, NOT (AI_LARGER_THAN_8.1) would be ambiguous. If "AI_LARGER_THAN_8.1" is acceptable, why would he consider "NOT (AI_LARGER_THAN_8.1)" to be ambiguous?

    Read the article

  • What's the relationship between meta-circular interpreters, virtual machines and increased performance?

    - by Gomi
    I've read about meta-circular interpreters on the web (including SICP) and I've looked into the code of some implementations (such as PyPy and Narcissus). I've read quite a bit about two languages which made great use of metacircular evaluation, Lisp and Smalltalk. As far as I understood Lisp was the first self-hosting compiler and Smalltalk had the first "true" JIT implementation. One thing I've not fully understood is how can those interpreters/compilers achieve so good performance or, in other words, why is PyPy faster than CPython? Is it because of reflection? And also, my Smalltalk research led me to believe that there's a relationship between JIT, virtual machines and reflection. Virtual Machines such as the JVM and CLR allow a great deal of type introspection and I believe they make great use it in Just-in-Time (and AOT, I suppose?) compilation. But as far as I know, Virtual Machines are kind of like CPUs, in that they have a basic instruction set. Are Virtual Machines efficient because they include type and reference information, which would allow language-agnostic reflection? I ask this because many both interpreted and compiled languages are now using bytecode as a target (LLVM, Parrot, YARV, CPython) and traditional VMs like JVM and CLR have gained incredible boosts in performance. I've been told that it's about JIT, but as far as I know JIT is nothing new since Smalltalk and Sun's own Self have been doing it before Java. I don't remember VMs performing particularly well in the past, there weren't many non-academic ones outside of JVM and .NET and their performance was definitely not as good as it is now (I wish I could source this claim but I speak from personal experience). Then all of a sudden, in the late 2000s something changed and a lot of VMs started to pop up even for established languages, and with very good performance. Was something discovered about the JIT implementation that allowed pretty much every modern VM to skyrocket in performance? A paper or a book maybe?

    Read the article

  • How do you go about understanding the source code of an Open source project?

    - by Anirudh Vemula
    I am planning to contribute code through patches to some open source organisations to become more aware of open source development. I have chosen some organisations but when I download their source code, I don't seem to understand even a bit of it. How do I go about understanding their source code? I tried going through resolving a bug but finding the place in the source code where the bug is present is also difficult when you have no idea about how the code is structured and implemented. I need help on this so I can start working on an open source code.

    Read the article

  • What is the advantage of a programmer's VM apart from portability

    - by user619818
    I can understand the benefits of Java running on a JVM. Portability. Nice simple reason. But I have always been puzzled as to why Microsoft brought out their own version of a JVM - .NET. C# is supposed to be a fine language (I haven't used it myself), but could Microsoft have launched the product to compile to native code, i.e. to generate an exe? My colleague is learning F#. The reason it has to be a language which runs on .NET is that the Microsoft Lync API which will be used is only available on .NET, i.e. there is no C API for Lync. A cynical view may be that the reason is vendor lock-in. F# will only run on a Microsoft platform (or C# for that matter), and so the program is locked in. But maybe I am missing some other benefit of a VM platform?

    Read the article
