Search Results

Search found 17140 results on 686 pages for 'records management'.


  • A Better Way to Plan, Execute and Manage Enterprise Architecture

    - by JuergenKress
    IT Strategies from Oracle is an authorized library of guidelines and reference architectures that will help you better plan, execute, and manage your enterprise architecture and IT initiatives. The IT Strategies from Oracle library offers two types of best-practice documents: practitioner guides containing pragmatic advice and approaches, and reference architectures containing proven technology patterns to jumpstart your initiative. The library can help you establish a reliable set of principles and standards to guide your use of Oracle technology, and we will expand it over time across all of Oracle's technologies. Today, you can access:
    - Overview documents providing an introduction to all the resources available in the library, plus best-practice maturity models.
    - Oracle Reference Architectures covering the application infrastructure foundation, management and monitoring, security, software engineering, service-oriented integration, service orientation, user interaction, engineered systems, and a master glossary.
    - Enterprise Technology Strategies for Service-Oriented Architecture, offering practitioner guides on creating a SOA roadmap, frameworks for governance, determining ROI, identifying services, and software engineering, plus white papers.
    - Enterprise Technology Strategies for Event-Driven Architecture, offering practitioner guides on creating an EDA roadmap and reference architectures on an EDA foundation and EDA infrastructure.
    - Enterprise Technology Strategies for Business Process Management, including practitioner guides on creating a BPM roadmap, business process engineering, and governance, and reference architectures on a BPM foundation and BPM infrastructure.
    - Enterprise Technology Strategies for Cloud Computing, including reference architectures on a Cloud foundation and Cloud infrastructure.
    - Enterprise Technology Strategies for Business Analytics, including a practitioner guide for creating a BA roadmap and reference architectures for a BA foundation and BA infrastructure.
    Get the Oracle Enterprise Architecture content here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Technorati Tags: Architecture, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Are scheduled job servers the right choice for a time sensitive game engine?

    - by maple_shaft
    I am currently architecting and designing an exciting new web application that will be entering into some areas where I have very little experience: game development. The application is not necessarily a game, but there are some very time-sensitive tasks and scheduled jobs that a server will need to run to perform game-related activities (e.g. a new match-up starts at noon every day for a 12-day tournament, scoreboards are updated at 5pm every day, etc.). In the past I have typically used cron jobs with the Quartz Scheduler running within a web application server, but I know that this isn't likely to be a scalable solution for the truly massive user base that management is telling me to expect (granted, they are management and are probably highly optimistic about this), and also given how important these tasks are to this web application. The other important consideration is that I want to avoid a SPOF (Single Point Of Failure): if the primary job server goes down, another job server should be able to successfully run the job in its place. I suppose this can be done with appropriate record locking and database transactions. My question is whether scheduled jobs like cron running on a web application server are a wise design choice given the time-sensitive game tasks of this application, or whether there is something more appropriate for running a scalable game engine in parallel with the web application servers.
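    One common way to get failover without a single point of failure, while staying with Quartz, is its clustered JDBC job store: several scheduler nodes share a database-backed job store, row-level locking ensures only one node fires each trigger, and a surviving node re-runs recoverable jobs from a failed node. The sketch below is illustrative only (Quartz 2.x API, hypothetical job and trigger names); the clustering itself is switched on in quartz.properties (org.quartz.jobStore.isClustered=true with a JDBC job store), not in code.

        import org.quartz.*;
        import org.quartz.impl.StdSchedulerFactory;

        public class MatchUpScheduling {

            // Hypothetical job body: start the day's match-up here.
            public static class TournamentJob implements Job {
                @Override
                public void execute(JobExecutionContext context) throws JobExecutionException {
                    System.out.println("Starting today's match-up");
                }
            }

            public static void main(String[] args) throws SchedulerException {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                JobDetail job = JobBuilder.newJob(TournamentJob.class)
                        .withIdentity("dailyMatchUp", "tournament")
                        .requestRecovery(true)   // if the owning node dies mid-run, another cluster node re-runs the job
                        .build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withIdentity("noonTrigger", "tournament")
                        .withSchedule(CronScheduleBuilder.cronSchedule("0 0 12 * * ?"))   // fire at noon every day
                        .build();
                scheduler.start();
                scheduler.scheduleJob(job, trigger);
            }
        }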

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer trying to gain more knowledge in order to become a better programmer. I've listed a number of technologies to learn. Beyond what I've listed, what technologies would you suggest a Junior Java Developer learn as well? I realize there's a lot to study.
    Java: how a garbage collector works; resource management; network programming; TCP/IP, HTTP; transactions; consistency; interfaces, classes, collections, hash codes, algorithms, computational complexity; concurrent programming (synchronization, semaphores); stream management; mutability and thread-safety; bytecode manipulation, reflection, and Aspect-Oriented Programming as a basis for understanding frameworks such as Spring.
    Web stack: servlets, filters, socket programming.
    Libraries: JDK, GWT, Apache Commons, Joda-Time. Dependency injection: Spring, Nano.
    Tools: IDE (very good knowledge), debugger, profiler, web analyzers (Wireshark, Firebug), unit testing.
    SQL/Databases. Basics: SELECTing columns from a table; aggregates part 1: COUNT, SUM, MAX/MIN; aggregates part 2: DISTINCT, GROUP BY, HAVING. Intermediate: JOINs, ANSI-89 and ANSI-92 syntax; UNION vs UNION ALL; NULL handling: COALESCE and native NULL handling; subqueries: IN, EXISTS, and inline views; correlated subqueries; WITH syntax: subquery factoring/CTE; views. Advanced topics: functions, stored procedures, packages; pivoting data: CASE and PIVOT syntax; hierarchical queries; cursors, implicit and explicit; triggers; dynamic SQL; materialized views; query optimization: indexes, explain plans, profiling. Data modelling: normal forms 1 through 3; primary and foreign keys; table constraints; link/corollary tables. Full-text searching; XML; isolation levels; entity relationship diagrams (ERDs), logical and physical; transactions: COMMIT, ROLLBACK, error handling.
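    To make a few of the items above concrete (resource management via try-with-resources, transactions, and error handling), here is a minimal, hypothetical JDBC sketch; the connection URL, credentials, and the account table are invented for the example:

        import java.math.BigDecimal;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class TransferExample {

            // Hypothetical connection details -- replace with your own.
            private static final String URL = "jdbc:mysql://localhost:3306/bank";

            public static void transfer(long fromId, long toId, BigDecimal amount) throws SQLException {
                // try-with-resources closes the connection even if an exception is thrown
                try (Connection con = DriverManager.getConnection(URL, "user", "password")) {
                    con.setAutoCommit(false);   // begin an explicit transaction
                    try (PreparedStatement debit = con.prepareStatement(
                                 "UPDATE account SET balance = balance - ? WHERE id = ?");
                         PreparedStatement credit = con.prepareStatement(
                                 "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                        debit.setBigDecimal(1, amount);
                        debit.setLong(2, fromId);
                        debit.executeUpdate();
                        credit.setBigDecimal(1, amount);
                        credit.setLong(2, toId);
                        credit.executeUpdate();
                        con.commit();           // COMMIT only if both statements succeed
                    } catch (SQLException e) {
                        con.rollback();         // ROLLBACK on any error, then rethrow
                        throw e;
                    }
                }
            }
        }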

    Read the article

  • SPARC SuperCluster Papers

    - by user12616590
    Oracle has been publishing white papers that describe uses and characteristics of the SPARC SuperCluster product. Here are just a few:
    - "A Technical Overview of the Oracle SPARC SuperCluster T4-4": SPARC SuperCluster T4-4 is a high performance, multi-purpose engineered system that has been designed, tested and integrated to run a wide array of enterprise applications. It is well suited for multi-tier enterprise applications with Web, database and application components. This 20-page paper discusses the components and technical characteristics of this product.
    - "SPARC SuperCluster T4-4 Platform Security Principles and Capabilities": The security capabilities designed into the SPARC SuperCluster, and architectural, deployment, and operational best practices for taking advantage of them.
    - "Consolidating Oracle E-Business Suite on Oracle's SPARC SuperCluster": This Oracle Optimized Solution describes the implementation and use of SPARC SuperCluster as a consolidation platform for E-Business Suite in 30 pages.
    - "Oracle Optimized Solution for Oracle PeopleSoft Human Capital Management on SPARC SuperCluster": The Oracle Optimized Solution for PeopleSoft Human Capital Management on SPARC SuperCluster is the industry's only proven, tested, applications-to-disk solution that maintains excellence managing absences, optimizing collaborative activities, streamlining knowledge and honing processes; 31 pages.
    I hope you find some of those papers useful.

    Read the article

  • Looking to Implement/Upgrade Your MDM Solution? OOW Has the Session For You

    - by Mala Narasimharajan
    By Bala Mahalingam. Hurray! Oracle OpenWorld is next week. Oh my God! I need to plan my calendar for MDM-focused sessions. The implementation or upgrade of an Oracle Master Data Management solution is part art, part science. This year at OpenWorld, we have a dedicated session focused on sharing two great implementation stories of Oracle Customer Hub. You will also hear from Oracle on the implementation and upgrade approach and methodology for Oracle Master Data Management and Data Quality applications. There are plenty of questions you might be weighing around the implementation of an Oracle MDM solution. If you are in the process of an implementation or upgrade, or evaluating your options for implementing an MDM solution, and you would like to hear directly from T-Mobile and Sony on their roadmap and implementation experience, then I would highly recommend this session. Hope to see you at Oracle OpenWorld 2012, and stay in touch via our future blogs. Look here for a list of all the MDM sessions at OpenWorld.

    Read the article

  • Work @ Java shop. Tasked to redo intranet. I only know PHP - CTO says use that; Ops Says no - we have JSP, use that - Thoughts?

    - by Mackysback
    So I work as a project manager and was given the opportunity to redo the intranet and Internet site... I'd love to; it would be a great addition to my resume. However, I am not familiar with Java or JSP. I am fairly proficient with PHP and MySQL. My intention was also to use a CMS that I have experience with: WordPress, Drupal, or Joomla. The site would be very simple, so I was thinking WordPress would be fine for this. I told management that I would need it to run on PHP and they gave me their blessing... Now for the disconnect between IT and management: I spoke with ops and they told me that we have Java and JSP for that purpose, although the IT guy did say, "oh, but I do know that PHP and MySQL are gaining popularity." That said, I do not understand much as far as systems architecture goes. We self-host and run WebSphere with IIS and JBoss with Tomcat. Any advice or suggestions? Thanks! Is my request that unfeasible?

    Read the article

  • Enablement 2.0 Get Specialized!

    - by mseika
    The Oracle PartnerNetwork Specialized program is releasing new certifications on our latest products, and partners are invited to be the first candidates to get certified. Oracle's certification exams go through a rigorous review process called a "beta period". Here are a few advantages of taking a beta exam:
    - Certification exams taken during the beta period count towards company Specializations.
    - Most new Certified Specialist exams have no training requirement.
    - Beta exam vouchers are available in limited quantity, so request a voucher today by contacting the Partner Enablement Team, and act fast to reserve your test from the list below.
    FREE certification testing: Are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st-4th, 2012, at the Marriott Marquis Hotel. Pre-register now!
    The beta testing period ends on October 6th, 2012 for Oracle E-Business Suite R12 Project Essentials (1Z1-511); on October 13th, 2012 for Oracle Hyperion Data Relationship Management Essentials (1Z1-588); and on November 17th, 2012 for Oracle Global Trade Management 6 Essentials (1Z1-589). Coming soon in beta: Oracle Fusion Distributed Order Orchestration Essentials Exam (1Z1-469). Take the exam(s) now at a nearby Pearson VUE testing center!
    Contact us: Please direct any inquiries you may have to the Oracle Partner Enablement team at [email protected]
    For more information: Oracle Certification Program Beta Exams, OPN Certified Specialist Exam Study Guides, OPN Certified Specialist FAQ

    Read the article

  • 2012 Oracle Fusion Innovation Awards - Part 1

    - by Michelle Kimihira
    Author: Moazzam Chaudry. This year we recognized 29 customers for their innovative use of Oracle Fusion Middleware and their significant results. The winners were selected across 8 product categories from 11 countries, spanning diverse industries around the world. This is a two-part blog series. The 2012 Fusion Middleware Innovation Awards winners were announced at OOW on October 2nd by Hasan Rizvi (EVP Fusion Middleware and Java development), Amit Zavery (VP Product Management) and Ed Zou (VP Product Management) to an audience that included press, analysts and customers. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The program is in its 6th year, and this year we were excited to receive over 250 submissions from customers around the globe. The winners were selected by a panel of internal and external judges, and it was difficult to select this year's most innovative projects; judges scored each entry across multiple scoring categories. This year, winning use cases for Fusion Middleware include:
    - Improving customer experience by monitoring in real time and simplifying the user experience for tens of millions of customers
    - Driving social engagement through social media channels in fields including healthcare
    - Harnessing big data by analyzing and improving visibility across 60M+ customers and hundreds of terabytes of data
    - Enabling mobile adoption by delivering a mobile news experience to 50% of the Australian population
    - Embracing cloud computing by delivering hospitality services to 3,000+ hotels and monitoring services to hospitals
    - Optimizing critical processes such as remarketing cars through tens of thousands of dealers
    In Monday's blog, we will talk about the winners in each category and what customers had to say in the customer panel. Congratulations to the 2012 Oracle Fusion Innovation Award winners!

    Read the article

  • How to prevent computer from automatically sleeping and/or hibernating?

    - by mehaase
    I'm running Ubuntu 12.04, and my laptop* won't wake from sleep/suspend/hibernate. (Is sleep the same thing as suspend?) I'm not even sure which of these things it's doing. When I am done working for the day, I lock my screen (Control-Alt-L). When I come back the next day, the screen is in power saving mode, and no amount of typing or clicking (on the usb keyboard/mouse or the builtin keyboard/trackpad) nor tapping the power button will bring it back to life. The only way I can get my machine to work is to hold down the power button until it shuts off, then press the power button again to turn it back on. Obviously, anything I had open from the previous day is pretty much gone -- in particular, my VMs all get rudely shut down without any warning. This is driving me INSANE. I spend the first hour of every work day trying to figure out how to get my computer to stop locking up over night. What I've tried: Editing the org.freedesktop.upower.policy to disable suspend and hibernate. Setting power management options in "Power" section of "System Settings". Looking at all power management options in the BIOS (none appear to be relevant to sleep/suspend/hibernate). Reading every forum post/askubuntu post that I can find that's even tangentially related to the subject. My question: how to disable the automatic sleep and/or hibernate (and/or anything similar) in Ubuntu 12.04. I don't care if it's still possible to sleep/suspend/hibernate/whatever by pushing buttons or running some command or reciting led zeppelin lyrics backwards. I just want my laptop to be ready for work in the morning. *The laptop is a Dell Latitude something or other. I don't want to get too specific because I've seen a lot of similar questions get closed for being too specific. I think my question is generic enough to stand -- it's a question about the latest, stable version of Ubuntu.

    Read the article

  • Growing your VirtualBox Virtual Disk

    - by Fat Bloke
    Don't you just hate it when this happens? Fortunately, if you're running inside VirtualBox, you can resize your virtual disk and magically give your guest a bigger disk very easily. There are two steps to doing this.
    1. Resize the virtual disk. Use the VBoxManage command-line tool to extend the size of the virtual disk, specifying the path to the disk and the new size in MB (a concrete example appears at the end of this post): VBoxManage modifyhd <uuid>|<filename> [--type normal|writethrough|immutable|shareable|readonly|multiattach] [--autoreset on|off] [--compact] [--resize <megabytes>|--resizebyte <bytes>] If you booted up your guest at this point, the extra space would be seen as an unformatted area on the disk. So we now need to tell the guest about the extra space available.
    2. Extend the guest's partition to use the extra space. How you do this step depends on your guest OS type and the tools you have available. Linux guests often include the excellent gparted partition editor, whereas Windows 7 and 8 provide the Computer Management tool, which can resize partitions. Unfortunately, my Windows XP VM has no such tool, but I do have a couple of other options. Most installable Linux .isos include the aforementioned gparted tool, so I could simply attach, say, an Ubuntu .iso as a virtual CD/DVD in my Windows XP VM, boot off that, use gparted to extend the Windows XP partition, and finally reboot. But I took another route: I attached my resized virtual disk to a Windows Server 2012 VM I had lying around, used the Computer Management tool in Windows Server 2012 to extend the partition of the Windows XP disk, then shut down, unplugged the disk, and reattached it to my Windows XP VM. (Note that if your VMs use different disk controllers, Windows will check the disks on booting.) When I finally boot up my Windows XP guest I see the available disk space and all is well. At least until the next time - FB
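    As a concrete example of step 1 above, assuming a disk image at "C:\VMs\WinXP\WinXP.vdi" (a hypothetical path) and a target size of 20 GB, the resize command would look something like this; note that --resize takes the new total size in megabytes, so 20 GB is 20480:

        VBoxManage modifyhd "C:\VMs\WinXP\WinXP.vdi" --resize 20480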

    Read the article

  • Oracle UCM / ECM

    - by ???
    What is Oracle UCM? Oracle UCM stands for Universal Content Management, which came to Oracle from Stellent Universal Content Management. The original article is written in Chinese and is largely unrecoverable here due to character-encoding loss; the readable fragments give an overview of UCM as an enterprise content management platform, covering web content management and the management of documents and other content types, including formats such as CAD drawings.

    Read the article

  • How do I sell Oracle? It all starts with the database

    - by swiesner
    Partner sales reps often ask me how they start a conversation with their customers around Oracle. The first question should be, "are you running an Oracle database?" Much of what we do at Oracle is intended to optimize our customers' investments in the Oracle database. Many of our acquisitions, new features in existing products, new applications, are often designed to improve the experience of the 400,000 Oracle database customers worldwide. Once you find an Oracle database customer, the next set of questions are much easier, but depend on your expertise: License Management, Upsell, and Services. Let's start with License Management: Have there been any changes to your infrastructure since you purchased the database licenses? Have you upgraded the servers on which the Oracle database is running? Have the number of users or employees increased since the last license purchase? Yes to any of these questions will lead to investigating correct licensing. The goal is to provide a "soft" license review. Oracle generally does not require any license keys to install our software, so we need to help our customers with compliance. Correct licensing is essential to managing costs, and can provide a great way to efficiently manage IT spend. You might want to contact your local Oracle Channel Manager or VAD for licensing assistance. Or, review the Software Investment Guide. Naturally, these questions can lead to upsell opportunities. If your customer has invested in Oracle technology to manage their data, that data is essential to running their business. We'll take a look at those upsell questions in a later blog post.

    Read the article

  • WebLogic Server 12c

    - by ???02
    This Japanese-language article (largely garbled here by character-encoding loss) introduces the availability and manageability features of WebLogic Server 12c, the Java EE 6-compatible application server at the heart of Oracle Fusion Middleware's Cloud Application Foundation. The recoverable sections cover: Active GridLink for RAC, which integrates WebLogic Server data sources with Oracle Real Application Clusters (Oracle RAC) and Oracle Database, including XA transaction affinity and, new in 12c, web session affinity for HTTP requests; the JDBC TLOG feature, which stores the transaction log (TLOG) in Oracle Database rather than on a file system so that, together with JMS data, it can be protected and replicated with Oracle Data Guard; and RESTful Management Services, which expose server monitoring information over HTTP (typically TCP port 80) in HTML, JSON, or XML. The article closes by noting WebLogic Server 12c's support for Java EE 6 and Java SE 7.
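    An application component consumes such a GridLink data source the same way as any other WebLogic data source, via a JNDI lookup; the sketch below is illustrative only, and the JNDI name jdbc/GridLinkDS is a made-up example:

        import java.sql.Connection;
        import java.sql.SQLException;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.sql.DataSource;

        public class GridLinkLookup {
            public Connection openConnection() throws NamingException, SQLException {
                InitialContext ctx = new InitialContext();
                // Look up the (hypothetical) data source bound by the server at jdbc/GridLinkDS
                DataSource ds = (DataSource) ctx.lookup("jdbc/GridLinkDS");
                return ds.getConnection();   // connection routing and affinity are handled by the data source
            }
        }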

    Read the article

  • SMO ConnectionContext.StatementTimeout setting is ignored

    - by Woody
    I am successfully using PowerShell with SMO to back up most databases. However, for several large databases I receive a timeout error: "System.Data.SqlClient.SqlException: Timeout expired". The timeout consistently occurs at 10 minutes. I have tried setting ConnectionContext.StatementTimeout to 0, to 6000, and to [System.Int32]::MaxValue; the setting made no difference. I have found a number of Google references which indicate that setting it to 0 makes it unlimited, but no matter what I try, the timeouts consistently occur at 10 minutes. I even set Remote Query Timeout on the server to 0 (via Management Studio) to no avail. Below is my SMO connection where I set the timeout, and the actual backup function. Further below is the output from my script.
    UPDATE: Interestingly enough, I wrote the backup function in C# using VS 2008 and the timeout override does work within that environment. I am in the process of incorporating that C# process into my PowerShell script until I can find out why the timeout override does not work with just PowerShell. This is extremely annoying!

        function New-SMOconnection {
            Param (
                $server,
                $ApplicationName = "PowerShell SMO",
                [int]$StatementTimeout = 0
            )
            # Write-Debug "Function: New-SMOconnection $server $connectionname $commandtimeout"
            if (test-path variable:\conn) {
                $conn.connectioncontext.disconnect()
            } else {
                $conn = New-Object('Microsoft.SqlServer.Management.Smo.Server') $server
            }
            $conn.connectioncontext.applicationName = $applicationName
            $conn.ConnectionContext.StatementTimeout = $StatementTimeout
            $conn.connectioncontext.Connect()
            $conn
        }

        $smo = New-SMOConnection -server $server
        if ($smo.connectioncontext.isopen -eq $false) {
            Throw "Could not connect to server $($server)."
        }

        Function Backup-Database {
            Param([string]$dbname)
            $db = $smo.Databases.get_Item($dbname)
            if (!$db) { "Database $dbname was not found"; Return }
            $sqldir = $smo.Settings.BackupDirectory + "\$($smo.name -replace ("\\", "$"))"
            $s = ($server.Split('\'))[0]
            $basedir = "\\$s\" + $($sqldir -replace (":", "$"))
            $dt = get-date -format yyyyMMdd-HHmmss
            $dbbk = new-object ('Microsoft.SqlServer.Management.Smo.Backup')
            $dbbk.Action = 'Database'
            $dbbk.BackupSetDescription = "Full backup of " + $dbname
            $dbbk.BackupSetName = $dbname + " Backup"
            $dbbk.Database = $dbname
            $dbbk.MediaDescription = "Disk"
            $target = "$basedir\$dbname\FULL"
            if (-not(Test-Path $target)) { New-Item $target -ItemType directory | Out-Null }
            $device = "$sqldir\$dbname\FULL\" + $($server -replace("\\", "$")) + "_" + $dbname + "_FULL_" + $dt + ".bak"
            $dbbk.Devices.AddDevice($device, 'File')
            $dbbk.Initialize = $True
            $dbbk.Incremental = $false
            $dbbk.LogTruncation = [Microsoft.SqlServer.Management.Smo.BackupTruncateLogType]::Truncate
            If (!$copyonly) {
                If ($kill) { $smo.KillAllProcesses($dbname) }
                $dbbk.SqlBackupAsync($server)
            }
            $dbbk
        }

    Script output:

        Started SQL backups for server LCFSQLxxx\SQLxxx at 05/06/2010 15:33:16
        Statement TimeOut value set to 0.
        DatabaseName    : OperationsManagerDW
        StartBackupTime : 5/6/2010 3:33:16 PM
        EndBackupTime   : 5/6/2010 3:43:17 PM
        StartCopyTime   : 1/1/0001 12:00:00 AM
        EndCopyTime     : 1/1/0001 12:00:00 AM
        CopiedFiles     :
        Status          : Failed
        ErrorMessage    : System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The backup or restore was aborted. 10 percent processed. 20 percent processed. 30 percent processed. 40 percent processed. 50 percent processed. 60 percent processed. 70 percent processed.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) Ended backups at 05/06/2010 15:43:23

    Read the article

  • EJB 2.0 deployment issues on JBoss 5.1

    - by Ravi
    I am deploying an ear application on Jboss 5.1.0. and i facing some issues. I had two ears one i had copied to deploy folder and the other in deploy-hasingleton. The ear which is in deploy-hasingleton is throwing some errors.when i serached in google i came to know that there is some issue with EJB 2.x on jboss 5.1.i was not able to find the solution. Below is the log. profileservice-secured.jar 11:16:17,162 INFO [JBossASKernel] installing bean: jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3 11:16:17,162 INFO [JBossASKernel] with dependencies: 11:16:17,162 INFO [JBossASKernel] and demands: 11:16:17,162 INFO [JBossASKernel] jboss.ejb:service=EJBTimerService 11:16:17,162 INFO [JBossASKernel] and supplies: 11:16:17,162 INFO [JBossASKernel] jndi:SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView 11:16:17,162 INFO [JBossASKernel] Class:org.jboss.deployers.spi.management.ManagementView 11:16:17,162 INFO [JBossASKernel] jndi:SecureManagementView/remote 11:16:17,162 INFO [JBossASKernel] Added bean(jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3) to KernelDeployment of: profileservice-secured.jar 11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@17cabbb{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true} 11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@1fedd5c{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true} 11:16:17,162 INFO [EJB3EndpointDeployer] Deploy AbstractBeanMetaData@1ef4b31{name=jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3_endpoint bean=org.jboss.ejb3.endpoint.deployers.impl.EndpointImpl properties=[container] constructor=null autowireCandidate=true} 11:16:17,833 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureDeploymentManager,service=EJB3 11:16:17,833 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureDeploymentManager ejbName: SecureDeploymentManager 11:16:18,066 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI: SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface 11:16:18,129 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3 11:16:18,129 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView 11:16:18,160 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI: SecureManagementView/remote - EJB3.x Default Remote Business Interface SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface 11:16:18,206 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3 11:16:18,206 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService 11:16:18,238 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI: SecureProfileService/remote 
- EJB3.x Default Remote Business Interface SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface 11:16:18,534 INFO [TomcatDeployment] deploy, ctxPath=/admin-console 11:16:18,612 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console' 11:16:21,759 INFO [TomcatDeployment] deploy, ctxPath=/ 11:16:21,853 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console 11:16:21,993 INFO [JBossASKernel] Created KernelDeployment for: hapi-0.5.jar 11:16:21,993 INFO [JBossASKernel] installing bean: jboss.j2ee:ear=jca-ear-1.3-SNAPSHOT.ear,jar=hapi-0.5.jar,name=hapi-0.5,service=EJB3 11:16:21,993 INFO [JBossASKernel] with dependencies: 11:16:21,993 INFO [JBossASKernel] and demands: 11:16:21,993 INFO [JBossASKernel] and supplies: 11:16:21,993 INFO [JBossASKernel] Added bean(jboss.j2ee:ear=jca-ear-1.3-SNAPSHOT.ear,jar=hapi-0.5.jar,name=hapi-0.5,service=EJB3) to KernelDeployment of: hapi-0.5.jar 11:16:23,302 INFO [ClientENCInjectionContainer] STARTED CLIENT ENC CONTAINER: hapi-0.5 11:16:23,473 INFO [SystemEventService] NODE_STARTED on node [HCA-5C1P1BS] 11:16:23,489 INFO [AbstractConnector] [aware] connector started 11:16:23,536 INFO [AbstractConnector] [datacaptor] connector started 11:16:23,536 INFO [AbstractConnector] [intellivue] connector started 11:16:23,972 ERROR [ProfileServiceBootstrap] Failed to load profile: Summary of incomplete deployments (SEE PREVIOUS ERRORS FOR DETAILS): DEPLOYMENTS MISSING DEPENDENCIES: Deployment "gehc.com:service=KernelServiceMBean" is missing the following dependencies: Dependency "jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule" (should be in state "Create", but is actually in state " NOT FOUND Depends on 'jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule' ") Deployment "jboss.j2ee:module="kernel-ejb-1.3-SNAPSHOT.jar",service=EjbModule" is missing the following dependencies: Dependency "gehc.com:service=KernelServiceMBean" (should be in state "Create", but is actually in state "Configured") DEPLOYMENTS IN ERROR: Deployment "jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule" is in error due to the following reason(s): ** NOT FOUND Depends on 'jboss.j2ee:module=kernel-ejb-1.3-SNAPSHOT.jar,service=EjbModule' ** 11:16:24,003 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8080 11:16:24,034 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009 11:16:24,050 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 1m:49s:575ms I had marked the error with bold, there is some circular dependency also. Thanks Ravi S

    Read the article

  • "type" Command Not Working As Expected on Git Bash

    - by trysis
    The type command, in Linux, returns the location, on the filesystem, of the given file, if it is in the current folder or the $PATH. This functionality is also available through Windows with the Git Bash command line program. The command also returns a file's location given the file without its extension (.exe, .vbs, etc.) However, I have run into what seems like a strange corner case where the file exists on the $PATH but doesn't get returned using the command. I am thinking of buying a new computer soon, so I looked up the method of transferring the license key from one computer to another, in preparation for actually doing this. The method I found mentioned the files slmgr.vbs and slui.exe, both of which reside in the C:/Windows\System32 folder, which is in my $PATH, as usual for a Windows computer. However, these two files aren't showing up when I use the type command. Also, neither gets executed when I call the files as commands without their extensions in Git Bash, and only slmgr.vbs gets executed when I call them with the extensions. Finally, slmgr.vbs is shown when listing the folder's contents in Git Bash, as well, but slui.exe isn't. I thought this might have to do with permissions, and, indeed, both files have very restrictive permissions, as you can see in the pictures below, but they both have the same permissions, which wouldn't explain why one gets executed and the other doesn't when called directly, nor why one file is listed on command line but the other isn't. C:\Windows\System32 folder, proving the files exist: File permissions for the Users and Administrators groups for the two files (they are identical): And the folder: type command and its output in Git Bash for the 2 files, plus listing the files in the folder (using grep to filter as the folder is huge), as well as listing part of the $PATH (keep in mind, for all these, that Git Bash changes the paths as they are displayed): Sean@MYPC ~ $ type -a slmgr sh.exe": type: slmgr: not found Sean@MYPC ~ $ type -a slmgr.vbs sh.exe": type: slmgr.vbs: not found Sean@MYPC ~ $ type -a slui sh.exe": type: slui: not found Sean@MYPC ~ $ type -a slui.exe sh.exe": type: slui.exe: not found Sean@MYPC ~ $ slmgr sh.exe": slmgr: command not found Sean@MYPC ~ $ slmgr.vbs /c/WINDOWS/system32/slmgr.vbs: line 2: syntax error near unexpected token `(' /c/WINDOWS/system32/slmgr.vbs: line 2: `' Copyright (c) Microsoft Corporation. A ll rights reserved.' Sean@MYPC ~ $ slui sh.exe": slui: command not found Sean@MYPC ~ $ slui.exe sh.exe": slui.exe: command not found Sean@MYPC ~ $ ls /c/Windows/System32/slui.exe /c/Windows/System32/slmgr.vbs ls: /c/Windows/System32/slui.exe: No such file or directory /c/Windows/System32/slmgr.vbs Sean@MYPC ~ $ echo $PATH /c/Users/Sean/bin:.:/usr/local/bin:/mingw/bin:/bin:/cmd:/c/Python33/:/c/Program Files (x86)/Intel/iCLS Client/:/c/Program Files/Intel/iCLS Client/:/c/WINDOWS/sy stem32:/c/WINDOWS:/c/WINDOWS/System32/Wbem:/c/WINDOWS/System32/WindowsPowerShell /v1.0/:/c/Program Files/Intel/Intel(R) Management Engine Components/DAL:/c/Progr am Files/Intel/Intel(R) Management Engine Components/IPT:/c/Program Files (x86)/ Intel/Intel(R) Management Engine Components/DAL:/c/Program Files (x86)/Intel/Int el(R) Management Engine Components/IPT:/c/Program Files/Intel/WiFi/bin/:/c/Progr am Files/Common Files/Intel/WirelessCommon/:/c/strawberry/c/bin:/c/strawberry/pe rl/site/bin:/c/strawberry/perl/bin:/c/Program Files (x86)/Microsoft ASP.NET/ASP. 
NET Web Pages/v1.0/:/c/Program Files/Microsoft SQL Server/110/Tools/Binn/:/c/Pro gram Files (x86)/Microsoft SQL Server/90/Tools/binn/:/c/Program Files (x86)/Open AFS/Common:/c/HashiCorp/Vagrant/bin:/c/Program Files (x86)/Windows Kits/8.1/Wind ows Performance Toolkit/:/c/Program Files/nodejs/:/c/Program Files (x86)/Git/cmd :/c/Program Files (x86)/Git/bin:/c/Program Files/Microsoft/Web Platform Installe r/:/c/Ruby200-x64/bin:/c/Users/Sean/AppData/Local/Box/Box Edit/:/c/Program Files (x86)/SSH Communications Security/SSH Secure Shell:/c/Users/Sean/Documents/Lisp :/c/Program Files/GCL-2.6.1/lib/gcl-2.6.1/unixport:/c/Chocolatey/bin:/c/Users/Se an/AppData/Roaming/npm:/c/wamp/bin/mysql/mysql5.6.12/bin:/c/Program Files/Oracle /VirtualBox:/c/Program Files/Java/jdk1.7.0_51/bin:/c/Program Files/Node-Growl:/c /chocolatey/bin:/c/Program Files/eclipse:/c/MongoDB/bin:/c/Program Files/7-Zip:/ c/Program Files (x86)/Google/Chrome/Application:/c/Program Files (x86)/LibreOffi ce 4/program:/c/Program Files (x86)/OpenOffice 4/program What's happening? Why aren't these files listed with the type command? Is this issue because of weird Windows permissions, or something even weirder? If permissions, why do they seem to have the same permissions, yet both are not handled in the same way?

    Read the article

  • Search like Google

    - by Rajanikant
    I have a task to make a search module in which i have database users and tablename userProfile and i want to search profile when i entered text in text box for ex. if i entered "I am looking for MBA in delhi" or 'mba information in delhi' it will displayed all user registered expertise as mba and city in delhi . this will be like job portal or any social networking portal my database is -- phpMyAdmin SQL Dump -- version 2.8.1 -- http://www.phpmyadmin.net -- Host: localhost -- Generation Time: May 01, 2010 at 10:58 AM -- Server version: 5.0.21 -- PHP Version: 5.1.4 -- Database: users -- -- Table structure for table userProfile CREATE TABLE userprofile ( id int(11) NOT NULL auto_increment, name varchar(50) collate latin1_general_ci NOT NULL, expertise varchar(50) collate latin1_general_ci NOT NULL, city varchar(50) collate latin1_general_ci NOT NULL, state varchar(50) collate latin1_general_ci NOT NULL, discription varchar(500) collate latin1_general_ci NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=3 ; -- -- Dumping data for table userProfile INSERT INTO userProfile VALUES (1, 'a', 'MBA HR', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); INSERT INTO userProfile VALUES (2, 'b', 'MBA marketing', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); and search.php page <?php include("config.php"); include("class.search.php"); $br=new search(); if($_POST['searchbutton']) { $str=$_POST['textfield']; $brstr=$br->breakkey($str); } ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Untitled Document</title> </head> <body> <table width="100%" border="0"> <form name="frmsearch" method="post"> <tr> <td width="367">&nbsp;</td> <td width="300"><label> <input name="textfield" type="text" id="textfield" size="50" /> </label></td> <td width="294"><label> <input type="submit" name="searchbutton" id="button" value="Search" /> </label></td> </tr></form> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> </table> </body> </html> and config.php is <?php error_reporting(E_ALL); $host="localhost"; $username="root"; $password=""; $dbname="users"; $con=mysql_connect($host,$username,$password) or die("could not connect database"); $db=mysql_select_db($dbname,$con) or die("could not select database"); ?> and class.search.php is <?php class search { function breakkey($key) { global $db; $words=explode(' ',$key); return $words; } function searchitem($perm) { global $db; foreach($perm as $k=>$v) { $sql="select * from users" } } } ?>

    Read the article

  • Passing a variable from Excel 2007 Custom Task Pane to Hosted PowerShell

    - by Uros Calakovic
    I am testing PowerShell hosting using C#. Here is a console application that works: using System; using System.Collections; using System.Collections.Generic; using System.Collections.ObjectModel; using System.Management.Automation; using System.Management.Automation.Runspaces; using Microsoft.Office.Interop.Excel; namespace ConsoleApplication3 { class Program { static void Main() { Application app = new Application(); app.Visible = true; app.Workbooks.Add(XlWBATemplate.xlWBATWorksheet); Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); runspace.SessionStateProxy.SetVariable("Application", app); Pipeline pipeline = runspace.CreatePipeline("$Application"); Collection<PSObject> results = null; try { results = pipeline.Invoke(); foreach (PSObject pob in results) { Console.WriteLine(pob); } } catch (RuntimeException re) { Console.WriteLine(re.GetType().Name); Console.WriteLine(re.Message); } } } } I first create an Excel.Application instance and pass it to the hosted PowerShell instance as a varible named $Application. This works and I can use this variable as if Excel.Application was created from within PowerShell. I next created an Excel addin using VS 2008 and added a user control with two text boxes and a button to the addin (the user control appears as a custom task pane when Excel starts). The idea was this: when I click the button a hosted PowerShell instance is created and I can pass to it the current Excel.Application instance as a variable, just like in the first sample, so I can use this variable to automate Excel from PowerShell (one text box would be used for input and the other one for output. Here is the code: using System; using System.Windows.Forms; using System.Management.Automation; using System.Management.Automation.Runspaces; using System.Collections.ObjectModel; using Microsoft.Office.Interop.Excel; namespace POSHAddin { public partial class POSHControl : UserControl { public POSHControl() { InitializeComponent(); } private void btnRun_Click(object sender, EventArgs e) { txtOutput.Clear(); Microsoft.Office.Interop.Excel.Application app = Globals.ThisAddIn.Application; Runspace runspace = RunspaceFactory.CreateRunspace(); runspace.Open(); runspace.SessionStateProxy.SetVariable("Application", app); Pipeline pipeline = runspace.CreatePipeline( "$Application | Get-Member | Out-String"); app.ActiveCell.Value2 = "Test"; Collection<PSObject> results = null; try { results = pipeline.Invoke(); foreach (PSObject pob in results) { txtOutput.Text += pob.ToString() + "-"; } } catch (RuntimeException re) { txtOutput.Text += re.GetType().Name; txtOutput.Text += re.Message; } } } } The code is similar to the first sample, except that the current Excel.Application instance is available to the addin via Globals.ThisAddIn.Application (VSTO generated) and I can see that it is really a Microsoft.Office.Interop.Excel.Application instance because I can use things like app.ActiveCell.Value2 = "Test" (this actually puts the text into the active cell). But when I pass the Excel.Application instance to the PowerShell instance what gets there is an instance of System.__ComObject and I can't figure out how to cast it to Excel.Application. When I examine the variable from PowerShell using $Application | Get-Member this is the output I get in the second text box: TypeName: System.__ComObject Name MemberType Definition ---- ---------- ---------- CreateObjRef Method System.Runtime.Remoting.ObjRef CreateObj... 
Equals Method System.Boolean Equals(Object obj) GetHashCode Method System.Int32 GetHashCode() GetLifetimeService Method System.Object GetLifetimeService() GetType Method System.Type GetType() InitializeLifetimeService Method System.Object InitializeLifetimeService() ToString Method System.String ToString() My question is how can I pass an instance of Microsoft.Office.Interop.Excel.Application from a VSTO generated Excel 2007 addin to a hosted PowerShell instance, so I can manipulate it from PowerShell? (I have previously posted the question in the Microsoft C# forum without an answer)

    Read the article

  • Unable to connect SQL Server instance from Visual Studio 2008 Version 9.0

    - by salvationishere
    I am getting the below error from VS on an 32-bit XP Professional server even though I set Tools-Options-Database Tools-Data Connections to "SIDEKICK", which is the name of my computer. In other words SIDEKICK should default to the full SQLSERVER. In other words, I want VS to use SQLSERVER instead of SQLSERVER EXPRESS. And I can clearly see my database both from VS in the Server Explorer and also in SSMS 2008. Furthermore, I can view the tables of this database in Server Explorer from VS. I do not get any build errors. And it looks like all of the naming is consistent in my web.config file. I am developing my website according to a Microsoft tutorial, so I should not be getting an error. Yet I get the following exception when I run this code below: CreateAccounts.aspx.cs file protected void CreateAccountButton_Click(object sender, EventArgs e) { MembershipCreateStatus createStatus; //This line below is where the exception occurs MembershipUser newUser = Membership.CreateUser(Username.Text, Password.Text, Email.Text, passwordQuestion, SecurityAnswer.Text, true, out createStatus); And here is what the exception looks like: System.Web.HttpException was unhandled by user code Message="Unable to connect to SQL Server database." Source="System.Web" ErrorCode=-2147467259 StackTrace: at System.Web.DataAccess.SqlConnectionHelper.CreateMdfFile(String fullFileName, String dataDir, String connectionString) at System.Web.DataAccess.SqlConnectionHelper.EnsureSqlExpressDBFile(String connectionString) at System.Web.DataAccess.SqlConnectionHelper.GetConnection(String connectionString, Boolean revertImpersonation) at System.Web.Security.SqlMembershipProvider.CreateUser(String username, String password, String email, String passwordQuestion, String passwordAnswer, Boolean isApproved, Object providerUserKey, MembershipCreateStatus& status) at System.Web.Security.Membership.CreateUser(String username, String password, String email, String passwordQuestion, String passwordAnswer, Boolean isApproved, Object providerUserKey, MembershipCreateStatus& status) at System.Web.Security.Membership.CreateUser(String username, String password, String email, String passwordQuestion, String passwordAnswer, Boolean isApproved, MembershipCreateStatus& status) at Membership_CreatingUserAccounts.CreateAccountButton_Click(Object sender, EventArgs e) in c:\Documents and Settings\Admin\My Documents\Visual Studio 2008\WebSites\WebSite2\Membership\CreatingUserAccounts.aspx.cs:line 24 at System.Web.UI.WebControls.Button.OnClick(EventArgs e) at System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) at System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) InnerException: System.Web.HttpException Message="Unable to connect to SQL Server database." 
Source="System.Web" ErrorCode=-2147467259 StackTrace: at System.Web.Management.SqlServices.GetSqlConnection(String server, String user, String password, Boolean trusted, String connectionString) at System.Web.Management.SqlServices.SetupApplicationServices(String server, String user, String password, Boolean trusted, String connectionString, String database, String dbFileName, SqlFeatures features, Boolean install) at System.Web.Management.SqlServices.Install(String database, String dbFileName, String connectionString) at System.Web.DataAccess.SqlConnectionHelper.CreateMdfFile(String fullFileName, String dataDir, String connectionString) InnerException: System.Data.SqlClient.SqlException Message="The user instance login flag is not supported on this version of SQL Server. The connection will be closed." Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=14 LineNumber=65536 Number=18493 Procedure="" Server="." State=1 StackTrace: at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Web.Management.SqlServices.GetSqlConnection(String server, String user, String password, Boolean trusted, String connectionString) InnerException: The web.config connection string looks like: <connectionStrings> <add name="SecurityTutorialsConnectionString" connectionString="data source=.;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|SecurityTutorialsDatabase3.mdf;User Instance=true" providerName="System.Data.SqlClient"/> </connectionStrings> <system.web> <membership defaultProvider="SecurityTutorialsSqlMembershipProvider"> <providers> <add name="SecurityTutorialsSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="SecurityTutorialsConnectionString" 
enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="true" applicationName="SecurityTutorials" requiresUniqueEmail="true" passwordFormat="Hashed" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="7" minRequiredNonalphanumericCharacters="1" passwordAttemptWindow="10" passwordStrengthRegularExpression=""/> </providers> </membership>

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK, once our web server is started the process will stay there listening for incoming requests and should not terminate. The code in the worker role would be like this.

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("NodejsHost entry point called", "Information");

        // retrieve the node.exe and entry node.js source code file name.
        var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe");
        var js = "index.js";

        // prepare the process starting of node.exe
        var info = new ProcessStartInfo(node, js)
        {
            CreateNoWindow = false,
            ErrorDialog = true,
            WindowStyle = ProcessWindowStyle.Normal,
            UseShellExecute = false,
            WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot")
        };
        Trace.WriteLine(string.Format("{0} {1}", node, js), "Information");

        // start the node.exe with entry code and wait for exit
        var process = Process.Start(info);
        process.WaitForExit();
    }

Then we can run it locally. In the compute emulator UI the worker role started, it executed Node.js, and the Node.js window appeared. Open the browser to verify the website hosted by our worker role.

Next let's deploy it to Azure, but we need some additional steps first. We need to create an input endpoint: by default there is no endpoint defined in a worker role. So we open the role property window in Visual Studio and create a new input TCP endpoint on the port we want our website to use; in this case I will use 80. Even though we created a web server, we should add a TCP endpoint to the worker role, since Node.js listens on TCP rather than HTTP. Then change "index.js" so that our web server listens on 80.

    var http = require("http");
    var port = 80;

    http.createServer(function (req, res) {
        res.writeHead(200, { "Content-Type": "text/plain" });
        res.end("Hello World\n");
    }).listen(port);

    console.log("Server running at port %d", port);

Then publish it to Windows Azure, and in the browser we can see our Node.js website running on a WACS worker role. Note that we may encounter an error if we try to run our Node.js website on port 80 in the local emulator. This is because the compute emulator registers port 80 and maps the 80 endpoint to 81; Node.js cannot detect this remapping, so when it tries to listen on 80 it fails because 80 is already in use.

Use NPM Modules

When we use WAWS to host Node.js, we can simply install the modules we need and then publish or upload all files to WAWS. But if we are using a WACS worker role, we have to take some extra steps to make the modules work. Assume that we plan to use "express" in our application. First of all we should download and install this module through the NPM command. But after the install finishes, the files are just on disk and are not included in the worker role project; if we deploy the worker role right now, the module will not be packaged and uploaded to Azure. Hence we need to add them to the project. In the Solution Explorer window click the "Show all files" button, select the "node_modules" folder and choose "Include In Project" from the context menu. But that is not enough: we also need to set all files in this module to "Copy always" or "Copy if newer", so that they are uploaded to Azure along with "node.exe" and "index.js". This is a painful step, since there may be many files in a module.
So I created a small tool which updates a C# project file and marks all of its items as "Copy always". The code is very simple.

    static void Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.WriteLine("Usage: copyallalways [project file]");
            return;
        }

        var proj = args[0];
        File.Copy(proj, string.Format("{0}.bak", proj));

        var xml = new XmlDocument();
        xml.Load(proj);
        var nsManager = new XmlNamespaceManager(xml.NameTable);
        nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003");

        // add the output setting to copy always
        var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager);
        UpdateNodes(contentNodes, xml, nsManager);
        var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager);
        UpdateNodes(noneNodes, xml, nsManager);
        xml.Save(proj);

        // remove the namespace attributes
        var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>");
        xml.LoadXml(content);
        xml.Save(proj);
    }

    static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager)
    {
        foreach (XmlNode node in nodes)
        {
            var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager);
            if (copyToOutputDirectoryNode == null)
            {
                // the item has no CopyToOutputDirectory element yet, so add one set to "Always"
                var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null);
                n.InnerText = "Always";
                node.AppendChild(n);
            }
            else
            {
                // the element exists but is not "Always", so update it
                if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0)
                {
                    copyToOutputDirectoryNode.InnerText = "Always";
                }
            }
        }
    }

Please be careful when using this tool. I created it only for this demo, so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, and it will set all items to "Copy always". Then reload the worker role project. Now let's change "index.js" to use express.

    var express = require("express");
    var app = express();

    var port = 80;

    app.configure(function () {
    });

    app.get("/", function (req, res) {
        res.send("Hello Node.js!");
    });

    app.get("/User/:id", function (req, res) {
        var id = req.params.id;
        res.json({
            "id": id,
            "name": "user " + id,
            "company": "IGT"
        });
    });

    app.listen(port);

Finally, let's publish it and have a look in the browser.

Use Windows Azure SQL Database

We can use Windows Azure SQL Database (a.k.a. WASD) from Node.js as well when hosting in a worker role. Since we can control the version of Node.js, here we can use the x64 version of "node-sqlserver". This is better than hosting Node.js on WAWS, which only supports x86. Just install the "node-sqlserver" module from NPM, copy "sqlserver.node" from the "Build\Release" folder to the "Lib" folder, include them in the worker role project and run my tool to set them to "Copy always". Finally, update "index.js" to use WASD.
    var express = require("express");
    var sql = require("node-sqlserver");

    var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
    var port = 80;

    var app = express();

    app.configure(function () {
        app.use(express.bodyParser());
    });

    // return all records of the Resource table
    app.get("/", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // return the records matching a key and culture through an ad-hoc query
    app.get("/text/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // return the records matching a key and culture through the GetItem stored procedure
    app.get("/sproc/:key/:culture", function (req, res) {
        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var key = req.params.key;
                var culture = req.params.culture;
                var command = "EXEC GetItem '" + key + "', '" + culture + "'";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.json(results);
                    }
                });
            }
        });
    });

    // insert a new record posted in the request body
    app.post("/new", function (req, res) {
        var key = req.body.key;
        var culture = req.body.culture;
        var val = req.body.val;

        sql.open(connectionString, function (err, conn) {
            if (err) {
                console.log(err);
                res.send(500, "Cannot open connection.");
            }
            else {
                var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
                conn.queryRaw(command, function (err, results) {
                    if (err) {
                        console.log(err);
                        res.send(500, "Cannot retrieve records.");
                    }
                    else {
                        res.send(200, "Inserted Successful");
                    }
                });
            }
        });
    });

    app.listen(port);

Publish it to Azure, and now we can see our Node.js application working with WASD through the x64 version of "node-sqlserver".

Summary

In this post I demonstrated how to host Node.js in a Windows Azure Cloud Service worker role. By using a worker role we can control the version of Node.js as well as the entry code, and it is possible to do some preparation work before the Node.js application starts. It also removes the IIS and IISNode limitations. I personally recommend using a worker role for Node.js hosting. But there are some problems with the approach I described here. The first is that we need to set all JavaScript files and module files to "Copy always" or "Copy if newer" manually. The second is that this way we cannot retrieve the cloud service configuration information.
For example, we defined the endpoint in the worker role properties, but we also hard-coded the listening port in Node.js. Ideally Node.js should retrieve the endpoint from the service configuration, but I can tell you that won't work with the approach shown here. In the next post I will describe another way to execute "node.exe" and the Node.js application so that we can read the cloud service configuration from Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Microsoft gets a first for its cloud security

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/21/microsoft-gets-a-first-for-itrsquos-cloud-security.aspx
    At http://blogs.technet.com/b/microsoft_blog/archive/2014/04/10/privacy-authorities-across-europe-approve-microsoft-s-cloud-commitments.aspx, the official Microsoft blog post by Brad Smith records that European data protection authorities have approved Microsoft's cloud contracts as meeting EU data protection law.

    Read the article

  • Uses for Cartesian Products in MS Access

    Less well known than inner and outer joins is the Cartesian product, which produces every possible combination of records between the two tables. Doug Steele offers four examples to demonstrate some legitimate uses for Cartesian products.
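    For instance, a grid of every combination can be generated with a query that has no join condition at all. A minimal sketch in Access-style SQL, using hypothetical Employees and WorkDays tables (not one of Doug Steele's actual examples):

        -- Every employee paired with every work day: a timesheet skeleton to fill in later.
        SELECT Employees.EmployeeID, WorkDays.WorkDate
        FROM Employees, WorkDays
        ORDER BY Employees.EmployeeID, WorkDays.WorkDate;

    Because there is no join condition, the result has (rows in Employees) × (rows in WorkDays) records, which is exactly what a scaffolding query like this should produce.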

    Read the article

  • Tom Cruise: Meet Fusion Apps UX and Feel the Speed

    - by ultan o'broin
    Unfortunately, I am old enough to remember, and now to admit that I really loved, the movie Top Gun. You know the one - Tom Cruise, US Navy F-14 ace pilot, Mr Maverick, crisis of confidence, meets woman, etc., etc. Anyway, one of the more memorable lines (there were a few) was: "I feel the need, the need for speed." I was reminded of Tom Cruise recently. Paraphrasing a certain Senior Vice President talking about Oracle Fusion Applications and user experience at an all-hands meeting, I heard that: Applications can never be too easy to use. Performance can never be too fast. Developers, assume that your code is always "on". Perfect. You cannot overstate the user experience importance of application speed to users, or at least their perception of speed. We all want that super speed of execution and performance, and increasingly so as enterprise users bring the expectations of consumer IT into the work environment. Sten Vesterli (@stenvesterli), an Oracle Fusion Applications User Experience Advocate, also addressed the speed point artfully at an Oracle Usability Advisory Board meeting in Geneva. Sten asked us, the next time we Google something, to think about the message telling us that Google has found hundreds of thousands or millions of results for us in a split second (for example, About 8,340,000 results (0.23 seconds)). Now, how many results can we see and how many can we use immediately? Yet, this simple message communicating the total results available to us works a special magic about speed, delight, and excitement that Google has made its own in the search space. And, guess what? The Oracle Application Development Framework table component relies on a similar "virtual performance boost", says Sten, when it displays the first 50 records in a table, and uses a scrollbar indicating the total size of the data record set. The user scrolls and the application automatically retrieves more records as needed. Application speed and its perception by users is worth bearing in mind the next time you're at a customer site and the IT Department demands that you retrieve every record from the database. Just think of... Dave Ensor: I'll give you all the rows you ask for in one second. If you promise to use them. (Again, hat tip to Sten.) And then maybe think of... Tom Cruise. And if you want to read about the speed of Oracle Fusion Applications, and what that really means in terms of user productivity for your entire business, then check out the Oracle Applications User Experience Oracle Fusion Applications white papers on the usable apps website.
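    For what it's worth, the same "fetch only what the user can see" idea can be sketched in plain SQL, independent of how the ADF table component actually implements it (hypothetical Orders table; the OFFSET/FETCH syntax assumes SQL Server 2012 or later):

        -- First visible page of 50 rows; the next page is fetched only when the user scrolls.
        SELECT OrderID, OrderDate, Amount
        FROM dbo.Orders
        ORDER BY OrderDate DESC
        OFFSET 0 ROWS FETCH NEXT 50 ROWS ONLY;

    The user perceives an instant response because the query returns only the rows that can actually be seen, not the whole record set.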

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
Here is a list of selected articles from SQLAuthority.com across all these years. Instead of listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

2007

Explanation and Understanding NOT NULL Constraint
NOT NULL is an integrity constraint. It does not allow creating a row in which the column contains a NULL value. The most discussed question about NULL is: what is NULL? I will not analyze it in depth here. Simply put, NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not like NULLs in the database unless they are absolutely necessary.

Three T-SQL Script to Create Primary Keys on Table
I have always enjoyed writing about three topics: constraints and keys, backup and restore, and datetime functions. Primary key constraints prevent duplicate values in a column and provide a unique identifier for each row, and they also create a clustered index on the columns.

2008

Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only
SQL is great with string operations. Many times I use T-SQL to do my string operations. Let us see a user-defined function, which I wrote a few days ago, that returns only the numeric values from an alphanumeric value.

Introduction and Example of UNION and UNION ALL
It is always interesting when I get requests from blog readers to rewrite my previous articles. I have received a few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I suggest you read my previous article first to understand the concept, and then read this article to see the same concept with an example.

Downgrade Database for Previous Version
The main question is how to downgrade a database from SQL Server 2005 to SQL Server 2000. The answer is: not possible.

Get Common Records From Two Tables Without Using Join
Following is my scenario. Suppose Table 1 and Table 2 have the same column, e.g. Column1. Following are the queries: 1. SELECT column1, column2 FROM Table1; 2. SELECT column1 FROM Table2. I want to find the common records from these tables, but I don't want to use the JOIN clause, because for that I need to specify the column name in the join condition. Will you help me get the common records without using a join condition? I am using SQL Server 2005. (See the short INTERSECT sketch below.)

Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2
A year ago I wrote a post, SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice, where I discussed two different methods of getting the date part from a datetime value.

Introduction to CLR – Simple Example of CLR Stored Procedure
CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later versions, database objects can be created in the CLR. Stored procedures, functions and triggers can be coded in CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible in T-SQL or which would use a lot of resources. CLR is usually implemented where there are intense string operations, thread management, or iteration methods which would be complicated in T-SQL. Implementing CLR also provides more security than extended stored procedures.
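One well-known way to get the common records without a JOIN clause is INTERSECT (a sketch, not necessarily the exact approach from the original article; Table1, Table2 and Column1 are the reader's hypothetical names):

    -- Distinct Column1 values that appear in both tables; no join condition needed.
    SELECT Column1 FROM Table1
    INTERSECT
    SELECT Column1 FROM Table2;

INTERSECT is available from SQL Server 2005 onwards, which matches the reader's version.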
2009

Comic Slow Query – SQL Joke
(Two comic images: "Before Presentation" and "After Presentation".)

Enable Automatic Statistic Update on Database
In one of my recent projects, I found that despite having good indexes and optimizing the query, I could not achieve optimal performance and still received an unoptimized response from SQL Server. On examination, I figured out that the culprit was statistics: the database I was trying to optimize had automatic update of statistics disabled.

Recently Executed T-SQL Query
Please refer to the blog post for a query that lists recently executed T-SQL queries on the database.

Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extention Script
At some point in your DBA career, you may find yourself in a position where you sit back and realize that your database collations have somehow run amok, or you are faced with the ever-annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings.

2010

Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology
Everyone dreams of visiting the school and college where they once studied. It is a great feeling to see the college again – the place where you spent the wonderful golden years of your life. College time is filled with studies, education, emotions and plans to build a future. I consider myself fortunate, as I got the opportunity to study at some of the best places in the world.

Change Column DataTypes
There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple? I often do, and I enjoy doing things which are straightforward and easy to change.

2011

Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry
In this blog post we will see three new DMVs introduced in Denali. The DMVs are very simple and there is not much to describe about them. So here is a simple game: I will ask a question back to you after showing the result of each DMV, and you help me complete this blog post.

A Simple Quiz – T-SQL Brain Trick
If you have some time, I strongly suggest you try this quiz out, as it will surely twist your brain.

2012

List All The Column With Specific Data Types in Database
Five years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very relevant and I liked it. This is one of those scripts which every developer would like to keep handy. I have upgraded the script a bit more and included some additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when writing it for the first time.

Find First Non-Numeric Character from String
The function PATINDEX has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character (a short sketch follows below).

Finding Different ColumnName From Almost Identitical Tables
Well, here is an interesting example of how we can use the sys.columns catalog view to get the details of a newly added column. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS.
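A minimal PATINDEX sketch of the "first non-numeric character" idea (hypothetical input value; the original article's script may differ):

    -- Position of the first character that is not a digit (0 if the string is all digits).
    -- The inline DECLARE initializer assumes SQL Server 2008 or later.
    DECLARE @s VARCHAR(50) = '12345ABC678';
    SELECT PATINDEX('%[^0-9]%', @s) AS FirstNonNumericPosition,
           SUBSTRING(@s, PATINDEX('%[^0-9]%', @s), 1) AS FirstNonNumericCharacter;

For '12345ABC678' this returns position 6 and the character 'A'.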
Storing Data and Files in Cloud – Dropbox – Personal Technology Tip
I thought long and hard about doing a Personal Technology Tips series for this blog. I have so many tips I'd like to share. I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance. The only thing holding me back – which tip to share first? The first tip obviously has the weight of seeming like the most important. But this would mean choosing amongst my favorite tricks and shortcuts. This is a hard task.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • AutoAudit 1.10c

    - by Paul Nielsen
    AutoAudit is a free SQL Server (2005, 2008) code-gen utility that creates audit trail triggers with:
    · Created, Modified, and RowVersion (incrementing INT) columns added to the table
    · A view to reconstruct deleted rows
    · A UDF to reconstruct row history
    · A schema audit trigger to track schema changes
    · Re-code-gen of the triggers when ALTER TABLE changes the table
    Version 1.10c adds:
    · CreatedBy and ModifiedBy columns. Pass the user to the column and AutoAudit records that username instead of the Suser_Sname...(read more)
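    As a rough illustration of the kind of plumbing such a utility generates, here is a hand-written sketch of an update audit trigger (not AutoAudit's actual generated code; the dbo.Customer table and its key column are hypothetical):

        -- Assumes dbo.Customer already has Modified, ModifiedBy and RowVersion columns.
        CREATE TRIGGER dbo.Customer_Audit_Update ON dbo.Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Stamp who changed each row and when, and bump the row version.
            UPDATE c
               SET Modified   = GETDATE(),
                   ModifiedBy = SUSER_SNAME(),
                   RowVersion = c.RowVersion + 1
            FROM dbo.Customer AS c
            JOIN inserted AS i ON i.CustomerID = c.CustomerID;
        END

    AutoAudit's value is that it code-gens and re-generates this kind of trigger for you whenever ALTER TABLE changes the table.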

    Read the article
