Search Results

Search found 25872 results on 1035 pages for 'document security'.


  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers, Oracle VM Server for SPARC, and the advent of SPARC SuperCluster have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with its own role:

    Control domain - the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well.
    I/O domain - has been assigned physical I/O devices: a PCIe root complex, a PCI device, or an SR-IOV (single-root I/O virtualization) function. It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.
    Service domain - provides virtual network and disk devices to guest domains.
    Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Typical deployment

    A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is typically also a service domain, in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to be a service domain too, so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by a single control domain, using multi-path I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain.

    In summary: not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for the performance of their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided.

    To recap:
    Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
    I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.
    Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general purpose environment; it's an engineered system with specifically configured applications tuned for optimal performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher capacity T-series servers have made it more attractive to use them for applications with high resource requirements.
New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.
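
    As a concrete illustration of the division of labor described above, here is a minimal command sketch for the simple configuration (a control domain doubling as the I/O and service domain, plus one guest). This sketch is not from the original post; service names, device paths, and sizes are illustrative assumptions.

        # In the control domain: create virtual disk and network services (names illustrative)
        ldm add-vds primary-vds0 primary                  # virtual disk service
        ldm add-vsw net-dev=nxge0 primary-vsw0 primary    # virtual switch backed by a physical NIC

        # Define a guest domain that uses only virtual devices
        ldm add-domain ldg1
        ldm set-vcpu 8 ldg1
        ldm set-memory 8g ldg1
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0   # physical backend exported by the service domain
        ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1
        ldm add-vnet vnet1 primary-vsw0 ldg1
        ldm bind ldg1
        ldm start ldg1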

    Read the article

  • How to convert an HTML table to an array in Python

    - by user345660
    I have an HTML document, and I want to pull the tables out of it and return them as arrays. I'm picturing two functions: one that finds all the HTML tables in a document, and a second one that turns HTML tables into 2-dimensional arrays. Something like this:

        htmltables = get_tables(htmldocument)
        for table in htmltables:
            array = make_array(table)

    There are two catches: (1) the number of tables varies from day to day, and (2) the tables have all kinds of weird extra formatting, like bold and blink tags, randomly thrown in. Thanks!
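
    A minimal sketch of one way to implement the two functions, assuming the BeautifulSoup library is available (an added dependency, not part of the original question); get_text() flattens nested formatting tags like bold and blink, so the weird markup drops out:

        # sketch assuming BeautifulSoup 4 (pip install beautifulsoup4)
        from bs4 import BeautifulSoup

        def get_tables(htmldocument):
            """Return all <table> elements found in the document."""
            soup = BeautifulSoup(htmldocument, "html.parser")
            return soup.find_all("table")

        def make_array(table):
            """Turn one table into a 2-D list, ignoring nested formatting tags."""
            rows = []
            for tr in table.find_all("tr"):
                cells = tr.find_all(["td", "th"])
                rows.append([cell.get_text(strip=True) for cell in cells])
            return rows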

    Read the article

  • Duplex communication using NetTcpBinding - ContractFilter mismatch?

    - by Shaul
    I'm making slow and steady progress towards having a duplex communication channel open between a client and a server, using NetTcpBinding. (FYI, you can observe my newbie progress here and here!) I'm now at the stage where I have successfully connected to my server, through the server's firewall, and the client can make requests of the server. In the other direction, however, things aren't quite so happy. It works fine when testing on my own machine, but when testing over the internet, when I try to initiate a callback from the server side, I get an error:

        The message with Action 'http://MyWebService/IWebService/HelloWorld' cannot be processed at the receiver, due to a ContractFilter mismatch at the EndpointDispatcher. This may be because of either a contract mismatch (mismatched Actions between sender and receiver) or a binding/security mismatch between the sender and the receiver. Check that sender and receiver have the same contract and the same binding (including security requirements, e.g. Message, Transport, None).

    Here are some of the key bits of code. First, the web interface:

        [ServiceContract(Namespace = "http://MyWebService",
                         SessionMode = SessionMode.Required,
                         CallbackContract = typeof(ISiteServiceExternal))]
        public interface IWebService
        {
            [OperationContract]
            void Register(long customerID);
        }

        public interface ISiteServiceExternal
        {
            [OperationContract]
            string HelloWorld();
        }

    Then, on the client side (I was fiddling with these attributes without really knowing what I'm doing):

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                         Namespace = "http://MyWebService")]
        class SiteServer : IWebServiceCallback
        {
            string IWebServiceCallback.HelloWorld()
            {
                return "Hello World!";
            }
            ...
        }

    So what am I doing wrong here?

    EDIT: Adding app.config code.
    From the server:

        <system.serviceModel>
          <diagnostics>
            <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true"
                            logMessagesAtTransportLevel="true" logEntireMessage="true"
                            maxMessagesToLog="1000" maxSizeOfMessageToLog="524288" />
          </diagnostics>
          <behaviors>
            <serviceBehaviors>
              <behavior name="mex">
                <serviceDebug includeExceptionDetailInFaults="true"/>
                <serviceMetadata/>
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <services>
            <service name="MyWebService.WebService" behaviorConfiguration="mex">
              <endpoint address="net.tcp://localhost:8000" binding="netTcpBinding"
                        contract="MyWebService.IWebService" bindingConfiguration="TestBinding"
                        name="MyEndPoint" />
              <endpoint address="mex" binding="mexTcpBinding" name="MEX" contract="IMetadataExchange"/>
              <host>
                <baseAddresses>
                  <add baseAddress="net.tcp://localhost:8000"/>
                </baseAddresses>
              </host>
            </service>
          </services>
          <bindings>
            <netTcpBinding>
              <binding name="TestBinding" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                       portSharingEnabled="false">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
                <security mode="None"/>
              </binding>
            </netTcpBinding>
          </bindings>
        </system.serviceModel>

    and on the client side:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="MyEndPoint" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false"
                       transferMode="Buffered" transactionProtocol="OleTransactions"
                       hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                       maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                       maxReceivedMessageSize="65536">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                              maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="None">
                  <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign">
                    <extendedProtectionPolicy policyEnforcement="Never" />
                  </transport>
                  <message clientCredentialType="Windows" />
                </security>
              </binding>
            </netTcpBinding>
          </bindings>
          <client>
            <endpoint address="net.tcp://mydomain.gotdns.com:8000/" binding="netTcpBinding"
                      bindingConfiguration="MyEndPoint" contract="IWebService" name="MyEndPoint" />
          </client>
        </system.serviceModel>
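
    One detail worth checking, offered as an assumption rather than a confirmed fix: ServiceBehaviorAttribute applies to service implementations, and WCF ignores it on callback classes; callback objects are configured with CallbackBehaviorAttribute instead. It is also worth regenerating the client proxy so both sides agree on the contract. A minimal sketch:

        // hedged sketch: decorate the callback implementation with CallbackBehavior,
        // not ServiceBehavior (which has no effect on callback objects)
        [CallbackBehavior(UseSynchronizationContext = false)]
        class SiteServer : IWebServiceCallback
        {
            public string HelloWorld()
            {
                return "Hello World!";
            }
        }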

    Read the article

  • Windows Azure – Write, Run or Use Software

    - by BuckWoody
    Windows Azure is a platform that has you covered, whether you need to write software, run software that is already written, or install and use “canned” software, whether you or someone else wrote it. Like any platform, it’s a set of tools you can use where it makes sense to solve a problem. The primary location for Windows Azure information is http://windowsazure.com. You can find everything there, from the development kits for writing software to pricing, licensing, and tutorials on all of that. I have a few links here for learning to use Windows Azure – although it’s best if you focus not on the tools, but on what you want to solve. I’ve broken it down here into various sections, so you can quickly locate things you want to know. I’ll include resources here from Microsoft and elsewhere – I use these same resources in the Architectural Design Sessions (ADS) I do with my clients worldwide.

    Write Software

    Also called “Platform as a Service” (PaaS), Windows Azure has lots of components you can use together or separately that allow you to write software in .NET or various Open Source languages to work completely online, or in partnership with code you have on-premises, or both – even if you’re using other cloud providers. Keep in mind that all of the features you see here can be used together, or independently. For instance, you might only use a Web Site, or use Storage, but you can use both together. You can access all of these components through standard REST API calls, or using our Software Development Kit’s APIs, which are a lot easier. In any case, you simply use Visual Studio, Eclipse, Cloud9 IDE, or even a text editor to write your code from a Mac, PC or Linux.

    Components you can use:
    Azure Web Sites: Windows Azure Web Sites allow you to quickly write and deploy websites, without setting up a Virtual Machine, installing a web server or configuring complex settings. They work alone, with other Windows Azure Web Sites, or with other parts of Windows Azure.
    Web and Worker Roles: Windows Azure Web Roles give you a full stateless computing instance with Internet Information Services (IIS) installed and configured. Windows Azure Worker Roles give you a full stateless computing instance without Internet Information Services (IIS) installed, often used in a "Services" mode. Scale-out is achieved either manually or programmatically under your control.
    Storage: Windows Azure Storage types include Blobs to store raw binary data, Tables to use key/value pair data (like NoSQL data structures), Queues that allow interaction between stateless roles, and a relational SQL Server database.
    Other Services: Windows Azure has many other services such as a security mechanism, a Cache (memcacheD compliant), a Service Bus, a Traffic Manager and more. Once again, these features can be used with a Windows Azure project, or alone based on your needs.
    Various Languages: Windows Azure supports the .NET stack of languages, as well as many Open-Source languages like Java, Python, PHP, Ruby, NodeJS, C++ and more.

    Use Software

    Also called “Software as a Service” (SaaS), this often means consumer or business-level software like Hotmail or Office 365. In other words, you simply log on, use the software, and log off – there’s nothing to install, and little to even configure. For the Information Technology professional, however, it’s not quite the same. We want software that provides services, but in a platform. That means we want things like Hadoop or other software we don’t want to have to install and configure.
    Components you can use:
    Kits: Various software “kits” or packages are supported with just a few clicks, such as Umbraco, Wordpress, and others.
    Windows Azure Media Services: Windows Azure Media Services is a suite of services that allows you to upload media for encoding, processing and even streaming – or even one or more of those functions. We can add DRM and even commercials to your media if you like. Windows Azure Media Services is used to stream large events all the way down to small training videos.
    High Performance Computing and “Big Data”: Windows Azure allows you to scale to huge workloads using a few clicks to deploy Hadoop Clusters or the High Performance Computing (HPC) nodes, accepting HPC Jobs, Pig and Hive Jobs, and even interfacing with Microsoft Excel.
    Windows Azure Marketplace: Windows Azure Marketplace offers data and programs you can quickly implement and use – some free, some for-fee.

    Run Software

    Also known as “Infrastructure as a Service” (IaaS), this offering allows you to build or simply choose a Virtual Machine to run server-based software.

    Components you can use:
    Persistent Virtual Machines: You can choose to install Windows Server, Windows Server with Active Directory, with SQL Server, or even SharePoint from a pre-configured gallery. You can configure your own server images with standard Hyper-V technology and load them yourselves – and even bring them back when you’re done. As a new offering, we also allow you to select various distributions of Linux – a first for Microsoft.
    Windows Azure Connect: You can connect your on-premises networks to Windows Azure instances.
    Storage: Windows Azure Storage can be used as a remote backup, a hybrid storage location and more, using software or even hardware appliances.

    Decision Matrix

    With all of these options, you can use Windows Azure to solve just about any computing problem. It’s often hard to know when to use something on-premises, when to use the cloud, and what kind of service to use. I’ve used a decision matrix in the last couple of years to take a particular problem and choose the proper technology to solve it. It’s all about options – there is no “silver bullet”, whether that’s Windows Azure or any other set of functions. I take the problem, decide which particular component I want to own and control – and choose the column that has that box darkened. For instance, if I have to control the wiring for a solution (a requirement in some military and government installations), that means the “Networking” component needs to be dark, and so I select the “On Premises” column for that particular solution. If I just need the solution provided and I want no control at all, I can look at “Software as a Service” solutions.

    Security, Pricing, and Other Info

    Security: Security is one of the first questions you should ask in any distributed computing environment. We have certification info, coding guidelines and more, even a general “Request for Information” (RFI) response already created for you.
    Pricing: Are there licenses? How much does this cost? Is there a way to estimate the costs in this new environment?
    New Features: Many new features were added to Windows Azure – a good roundup of those changes can be found here.
    Support: Software support on Virtual Machines, general support.

    Read the article

  • Using jQuery to echo text

    - by susmits
    Is it possible to use jQuery to echo text in place of a script tag? More precisely, is there a way to accomplish

        <script type="text/javascript">
        document.write("foo");
        </script>

    ... without the use of document.write? I am not happy about using document.write after reading this. I am aware that I could alternatively do this:

        <span id="container"></span>
        <script type="text/javascript">
        $("#container").text("foo");
        </script>

    However, I'm interested to see if there's a way to do it without using a container element, preferably using jQuery. Thanks in advance!
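
    One container-less approach, offered as a sketch rather than part of the original question: while the page is still parsing, the currently executing script element is the last one in the document, so it can be located and replaced in place (this assumes synchronous, non-deferred execution):

        <script type="text/javascript">
        // find the <script> element that is executing right now
        var scripts = document.getElementsByTagName("script");
        var current = scripts[scripts.length - 1];
        // swap the script tag for the text, so "foo" lands where the tag was
        $(current).replaceWith("foo");
        </script>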

    Read the article

  • SharePoint - DIP problem on local copies downloaded from the portal

    - by user255234
    Our users have issues where the Document Information Panel (DIP) appears in Word, Excel, and PowerPoint documents. Here is the scenario:

    1. User downloads a copy of a template from the portal to his/her local hard drive.
    2. User edits that template and renames the file, again saving it to his/her local drive.
    3. User continues to edit and update that new document, and it continues to prompt him/her for document information as if the user were saving it on the portal.
    4. User clicks the “X” to remove this DIP bar; the next time he/she opens the local file, it comes back.

    Is there anything I can do to make the DIP NOT SHOW on the local copies? Thanks!

    Read the article

  • JavaScript DOM encoded HTML code problem

    - by Kubi
    var accordion = document.createTextNode(tp.getAccordionContent());
    var el = document.createElement("div");
    el.appendChild(accordion);
    document.getElementById("gp").appendChild(el);

    The tp.getAccordionContent() function returns a string like "<div ...", and it is being rendered as escaped text rather than markup. All I want is for that string to be treated as HTML and used to populate the div which has id = gp with that data. Any help and advice would be greatly appreciated.
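
    A sketch of the usual fix, assuming the goal is to render the returned markup rather than display it as text: assign the string to innerHTML instead of wrapping it in a text node.

        var el = document.createElement("div");
        el.innerHTML = tp.getAccordionContent();   // parses the string as HTML
        document.getElementById("gp").appendChild(el);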

    Read the article

  • How to get an additional response in ajax?

    - by udaya
    Hi, I am using ajax to show the error message... I am getting the value in success: function(msgqq). If I want to get another response, how do I get it? For example, I want both success: function(msgqq) and success: function(msg). This is my ajax:

        function UserNameAvailablity(inp) {
            $.ajax({
                type: "GET",
                url: "<?=base_url()?>/system/application/views/ssitAjax.php",
                cache: false,
                data: "txtUserName=" + inp,
                success: function(msgqq) {
                    document.getElementById('showErrmessage').innerHTML = msgqq;
                    $("#showErrmessage").html(msgqq);
                    if (txtUserName.value == "") {
                        document.getElementById('txtUserName').focus();
                        document.getElementById('txtUserName').value = "";
                    }
                }
            });
        }
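
    One common pattern, sketched under the assumption that the server side can be changed (none of this is from the original question): have the PHP script return a single JSON object carrying both values, and read the fields in one success callback.

        // PHP side (assumption): echo json_encode(array('err' => $err, 'msg' => $msg));
        $.ajax({
            type: "GET",
            url: "<?=base_url()?>/system/application/views/ssitAjax.php",
            cache: false,
            data: "txtUserName=" + inp,
            dataType: "json",                          // tell jQuery to parse the response as JSON
            success: function(resp) {
                $("#showErrmessage").html(resp.err);   // first value
                $("#otherMessage").html(resp.msg);     // second value (hypothetical element id)
            }
        });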

    Read the article

  • Windows Sharepoint Service (WSS) 3.0 search Issue

    - by dewacorp.alliances
    Based on this http://servergrrl.blogspot.com/2008/02/how-to-use-wss-30-to-search-more-than.html blog, I've created a domain account for testing and set the permissions at the site level, as well as on the Document Library, to allow a specific user (DOMAIN/TestUser1) to access this folder. Now, it's interesting that the user can see the document by drilling through, BUT searching won't find the document. If I test with the site owner (both primary and secondary), it works nicely. What did I do wrong then? It must be related to permissions, but I couldn't work out what it is. BTW, I am using WSS, not MOSS 2007. Thanks

    Read the article

  • Mac OS X: java.lang.ClassNotFoundException: com.sun.java.browser.plugin2.DOM

    - by Thilo
    I am trying to use the new LiveConnect features introduced in Java 6 Update 10. The code looks like this (copied from the applet tutorial):

        Class<?> c = Class.forName("com.sun.java.browser.plugin2.DOM");
        Method m = c.getMethod("getDocument", java.applet.Applet.class);
        Document document = (Document) m.invoke(null, this);

    But all I am getting is a ClassNotFoundException for the entry-point class. This is on the Mac, 10.6, with both Firefox and Safari:

        Java Plug-in 1.6.0_22
        Using JRE version 1.6.0_22-b04-307-10M3261 Java HotSpot(TM) 64-Bit Server VM

    Is this not implemented on the Mac? Or do I need to configure something? All I need to do is get and set the value of form elements on the page, so I would be fine with an older (pre-6u10) API if that works better.
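
    Since the question allows a pre-6u10 fallback, here is a hedged sketch using the older netscape.javascript.JSObject API from plugin.jar, which has long been available on the Mac; the element id is illustrative:

        import netscape.javascript.JSObject;   // from plugin.jar (assumed present)

        // inside the applet: read and set a form field via classic LiveConnect
        JSObject win = JSObject.getWindow(this);
        JSObject field = (JSObject) win.eval("document.getElementById('myField')");  // 'myField' is illustrative
        Object value = field.getMember("value");
        field.setMember("value", "new text");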

    Read the article

  • Opening Office 2007 Documents from in memory storage - How?

    - by John S
    Hi there, I'm a C++ developer wrestling with updating an application that made extensive use of the IStorage interface to open pre-Office 2007 documents from in-memory storage (via ILockBytes). If you're still following me so far, you probably know that the new Office document formats are incompatible with IStorage containers. The application I'm trying to update relied upon the IPersistStorage interface that all Office applications have, and the code as written calls the Load method of IPersistStorage to read in a document from an IStorage interface. So the question is... what kind of COM interfaces are available to me to read in an Office 2007 document from an in-memory container? John "S"

    Read the article

  • C# Regex on XML string handler

    - by Dan Sewell
    Hi guys. I'm trying to fiddle around with regex here, my first attempt. I'm trying to extract some figures out of content from an XML tag. The content looks like this:

        www.blahblah.se/maps.aspx?isAlert=true&lat=51.958855252721&lon=-0.517657021473527

    I need to extract the lat and lon numerical values out of each link. They will always be the same number of characters, and the lon may or may not have a "-" sign. I thought about doing something like this below (the string in question is in the "link" tag):

        var document = XDocument.Load(e.Result);
        if (document.Root == null) return;
        var events = from ev in document.Descendants("item1")
                     select new
                     {
                         Title = ev.Element("title").Value,
                         Latitude = Regex.xxxxxxx(ev.Element("link").Value, @"lat=(?<Lat>[+-]?\d*\.\d*)", String.Empty),
                         Longitude = Convert.ToDouble(ev.Element("link").Value),
                     };
        foreach (var ev in events)
        {
            // do stuff
        }

    Many thanks!
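
    A hedged sketch of one way to fill in the blanks using Regex.Match with named groups (the exact pattern is an assumption, not from the original post). Note that the original Longitude line converts the entire link and will throw; it needs the same extraction treatment, and InvariantCulture keeps '.' as the decimal separator:

        // needs: using System.Globalization;
        var link = ev.Element("link").Value;
        var lat = Regex.Match(link, @"lat=(?<Lat>[+-]?\d+\.\d+)").Groups["Lat"].Value;
        var lon = Regex.Match(link, @"lon=(?<Lon>[+-]?\d+\.\d+)").Groups["Lon"].Value;
        double latitude = double.Parse(lat, CultureInfo.InvariantCulture);
        double longitude = double.Parse(lon, CultureInfo.InvariantCulture);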

    Read the article

  • Libraries/Solutions for using XSL to create interactive web forms from XML

    - by Brabster
    This seems like a pretty straightforward thing to do, but I can't find anything off the open-source shelf. Is there a solution already out there that does the following:

    - can be configured with an arbitrary XSL stylesheet
    - generates a web form based on an arbitrary XML document and the XSL
    - creates edit functionality in appropriate places in the rendered form
    - updates the local representation of the XML document
    - provides capabilities to view and save the new XML document

    Ideally, one that plugs into a Java web application. Even better if it can generate the XSL based on schema documents – but that might not be feasible; I haven't really thought it through. For context, I'm thinking of things like pleasant-for-humans editing of Maven POMs, Ant build.xml, etc. Cheers,

    Read the article

  • Creating BlackBerry method stubs using wscompile on WSDL from ColdFusion

    - by Jim B
    I have been working on a BlackBerry application that consumes web services from ColdFusion 7. The Java ME SDK and the Java Wireless Toolkit both require that the generated WSDL be of the document/literal type. Fortunately, I have input on the web service development, so I tried setting 'style="document"' in the cfcomponent tag. This generated a document/literal style WSDL, but now wscompile generates the following errors in several places:

        Found unknown simple type: javax.xml.soap.SOAPElement
        Found unknown simple type: java.util.Calendar

    Any ideas why this is happening? The WSDL does get parsed correctly by the JWSDP tool, but the stubs use namespaces that are not available in the J2ME platform. I would have thought ColdFusion WSDL would work more easily with other products in the Java family.

    Read the article

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others. The problem that I am running into is that some (3,4)-grams in my data which have super-high idf actually consist of component unigrams and bigrams which have really low idf. For example, "you've never tried" has a very high idf, while each of the component unigrams has very low idf. I need to come up with a function that can take in the document frequencies of an n-gram and all its component (n-k)-grams and return a more meaningful measure of how much this phrase will distinguish the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models leverage to perform well, and so how well they would do for IDF scores. Anybody have any better ideas?
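
    One candidate measure, offered as an assumption rather than an established answer for this exact setting, is pointwise mutual information computed from document frequencies: it rewards an n-gram only to the extent that it occurs more often than its parts would predict under independence. With N documents in the corpus and P(x) ≈ df(x) / N:

        score(w1 ... wn) = log [ P(w1 ... wn) / ( P(w1) * P(w2) * ... * P(wn) ) ]

    A high-idf phrase built from low-idf words ("you've never tried") scores well only if its observed frequency beats the independence estimate; a backoff-flavored variant of the same idea compares the n-gram's df against the df of its (n-1)-gram components instead of its unigrams.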

    Read the article

  • JavaScript: too much recursion?

    - by Ken
    Hi, I'm trying to make a script that automatically starts uploading after the data has been entered into the database (I need the autoId that the database makes to upload the file). When I run the JavaScript, the script runs the PHP file, but it fails calling the other PHP file to upload the file:

        too much recursion
        setTimeout(testIfToegevoegd(),500);

    This is the script that gives the error:

        send("/projects/backend/nieuwDeeltaak.php",
             'deeltaakNaam=' + f.deeltaaknaam.value + '&beschrijving=' + f.beschrijving.value +
             '&startDatum=' + f.startDatum.value + '&eindDatum=' + f.eindDatum.value +
             '&deeltaakLeider=' + f.leiderID.value + '&projectID=' + f.projectID.value, id);

        function testIfToegevoegd() {
            if (document.getElementById('resultaat').innerHTML == "<b>De deeltaak werd toegevoegd</b>") {
                // stop testing + upload the file
                document.getElementById('nieuwDeeltaak').target = 'upload_target';
                document.forms["nieuwDeeltaak"].submit();
            } else {
                setTimeout(testIfToegevoegd(), 500);
            }
        }
        testIfToegevoegd();

    Sorry for the Dutch names; we have to use them, it's a school project. When I click the button that calls all this a second time (after the error), it works fine.
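
    The "too much recursion" error comes from the parentheses: setTimeout(testIfToegevoegd(), 500) calls the function immediately and passes its return value to setTimeout, so the function re-enters itself without ever yielding. A sketch of the fix is to pass the function reference instead:

        function testIfToegevoegd() {
            if (document.getElementById('resultaat').innerHTML == "<b>De deeltaak werd toegevoegd</b>") {
                document.getElementById('nieuwDeeltaak').target = 'upload_target';
                document.forms["nieuwDeeltaak"].submit();
            } else {
                setTimeout(testIfToegevoegd, 500);   // no () — defer the call instead of making it now
            }
        }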

    Read the article

  • Storing script files outside web root

    - by memilanuk
    I've seen recommendations to store some or all PHP include files someplace other than in the web document root directory (username/public_html in my case), for the specific reason of protecting PHP files with sensitive information (like database connection and login info) in the event that the web server hiccups, stops protecting PHP files, and they become 'visible' to outsiders who know where to look. It seems somewhat paranoid to me, but I'm guessing people have gotten burned badly on this before, so I'm willing to go along. The suggestion usually takes the form of having the include files in something like '../include_files/' so it's not directly in the document root and not directly accessible to outsiders through the web server. My question is this: is there a significant difference in security between that way and just putting your 'include_files' directory under the document root and sticking an .htaccess file in there (with the appropriate entries)? Would putting an .htaccess file in '../include_files/' make any significant improvement there? TIA, Monte
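
    For reference, the .htaccess approach mentioned above usually amounts to a couple of lines; this is a hedged sketch (directory name illustrative), and note that it only protects you while Apache is actually honoring .htaccess files (AllowOverride enabled), which is exactly the kind of server hiccup the recommendation worries about:

        # include_files/.htaccess — deny all direct web access
        # Apache 2.2 syntax:
        Order allow,deny
        Deny from all

        # Apache 2.4 equivalent:
        # Require all denied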

    Read the article

  • File Upload in GWT in a Special Case

    - by Maksud
    I am writing software for a document system. In this system, when a user completes a document and wants to save it, the document will be uploaded directly to the server without user action. The system uses COM/ActiveX to let users work with native editors. OK, my problem is: suppose I have a file, say d:/notepad.txt. Using the classical method, a user can browse for the file and upload it. I can do that with Apache Commons IO and GWT FormPanel and FileUpload. But if I know the filename (d:/notepad.txt), is there any way to upload the file directly to the server without the user having to browse for the file? I am currently doing this by having the ActiveX component call some HttpUpload methods with POST, but that does not maintain the session. Thanks

    Read the article

  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo. The first requires that the repo is cloned into the document root of the webserver; in the post-update hook, a git pull is used:

        cd /srv/www/siteA/ || exit
        unset GIT_DIR
        git pull hub master

    The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root:

        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f

    The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (however, files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but this is easily solved using htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?
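
    For concreteness, a minimal sketch of the second approach as a complete hook (path and branch name are illustrative); it lives at hooks/post-receive inside the bare repo and must be executable:

        #!/bin/sh
        # deploy the tip of master into the web root on every push
        GIT_WORK_TREE=/srv/www/siteA git checkout -f master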

    Read the article

  • Notifications for Expiring DBSNMP Passwords

    - by Courtney Llamas
    Most user accounts these days have a password profile on them that automatically expires the password after a set number of days. Depending on your company’s security requirements, this may be as little as 30 days or as long as 365 days, although typically it falls between 60-90 days. For a normal user, this can cause a small interruption in your day as you have to go get your password reset by an admin. When this happens to privileged accounts, such as the DBSNMP account that is responsible for monitoring database availability, it can cause bigger problems. In Oracle Enterprise Manager 12c you may notice the error message “ORA-28002: the password will expire within 5 days” when you connect to a target, or worse, you may get “ORA-28001: the password has expired”. If you wait too long, your monitoring will fail because the password is locked out. Wouldn’t it be nice if we could get an alert 10 days before our DBSNMP password expired? Thanks to Oracle Enterprise Manager 12c Metric Extensions (ME), you can! See the Oracle Enterprise Manager Cloud Control Administrator’s Guide for more information on Metric Extensions.

    To create a metric extension, select Enterprise / Monitoring / Metric Extensions, and then click on Create. On the General Properties screen, select either Cluster Database or Database Instance, depending on which target you need to monitor. If you have both RAC and single instance, you may need to create one for each. In this example we will create a Cluster Database metric. Enter a Name for the ME and a Display Name. Then select SQL for the Adapter. Adjust the Collection Schedule as desired; for this example we will collect this metric once per day. Notice that for a metric collected daily, we can determine the exact time we want to collect.

    On the Adapter page, enter the query that you wish to execute. In this example we will use the query below, which specifically checks for the DBSNMP user expiring within 10 days. Of course, you can adjust this query to alert for any user that can cause an outage, such as an application account or a service account such as RMAN.

        select username, account_status, trunc(expiry_date - sysdate) days_to_expire
          from dba_users
         where username = 'DBSNMP'
           and expiry_date is not null;

    The next step is to create the columns to store the data returned from the query. Click Add and add a column for each of the fields in the same order that the data is returned. The table below will help you complete the column additions.

        Name           Display Name            Column Type   Value Type   Metric Category   Unit
        -------------  ----------------------  ------------  -----------  ----------------  ----
        Username       User Name               Key           String       Security
        AccountStatus  Account Status          Data          String       Security
        DaysToExpire   Days Until Expiration   Data          Number       Security          Days

    When creating the DaysToExpire column, you can add default thresholds here for Warning and Critical (say < 10 and 5). When all columns have been added, click Next.

    On the Credentials page, you can choose to use the default monitoring credentials or specify new credentials. We will use the default credentials established for our target (dbsnmp).

    The next step is to test your Metric Extension. Click on Add to select a target for testing, then click Select. Now click the button Run Test to execute the test against the selected target(s). We can see in the example below that the Metric Extension has executed and returned a value of 68 days to expire. Click Next to proceed.

    Review the metric extension in the final screen and click Finish. The metric will be created in Editable status.
    Select the metric, click Actions and select Deployable Draft. You can do this once more to move it to Published. Finally, we want to apply this metric to a target. When managing many targets, it’s best to add your metric to a template; for details on adding a Metric Extension to a template, see the Administrator’s Guide. For this example, we will deploy this to a target directly. Select Actions / Deploy to Targets. Click Add, select the target you wish to deploy to, and click Submit. Once deployment is complete, we can go to the target and view the Metric & Collection Settings to see the new metric and its thresholds.

    After some time, you will find the metric has collected, and the days to expiration for the DBSNMP user can be seen in the All Metrics view. For metrics collected once per day, you may have to wait up to 24 hours to see the metric and current severity. In the example below, the current severity is Clear (green check) as it is not scheduled to expire within 10 days.

    To test the notification, we can edit the thresholds for the new metric so they trigger an alert. Our password expires in 139 days, so we’ll change our Warning to 140 and leave Critical at 5; in our example we also changed the collection time to every 5 minutes. At the next collection, you’ll find that the current severity changes to a Warning, and any related Incident Rules would be triggered to create an Incident or Notification as desired.

    Now that you get a notification that your DBSNMP password is about to expire, you can use the OEM Command Line Interface (EM CLI) verb update_db_password to change it at both the database target and the OEM target in one step. The caveat is you must know the existing password to use the update_db_password command. To learn more about EM CLI, see the Oracle Enterprise Manager Command Line Interface Guide. Below is an example of changing the password with the update_db_password verb:

        $ ./emcli update_db_password -target_name=emrep -target_type=oracle_database \
            -user_name=dbsnmp -change_at_target=yes -change_all_references=yes
        Enter value for old_password :
        Enter value for new_password :
        Enter value for retype_new_password :
        Successfully submitted a job to change the password in Enterprise Manager and on the target database: "emrep"
        Execute "emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84" to check the status of the job.
        Search for job name "CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84" on the Jobs home page to check job execution details.

    The subsequent job created will typically run quickly enough that a blackout is not needed; however, if you submit a script with many targets to change, your job may run slower, so adding a blackout to the script is recommended.

        $ ./emcli get_jobs -job_id=FA66C1C4D663297FE0437656F20ACC84
        Name:         CHANGE_PWD_JOB_FA66C1C4D662297FE0437656F20ACC84
        Type:         ChangePassword
        Job ID:       FA66C1C4D663297FE0437656F20ACC84
        Execution ID: FA66C1C4D665297FE0437656F20ACC84
        Scheduled:    2014-05-28 09:39:12
        Completed:    2014-05-28 09:39:18
        TZ Offset:    GMT-07:00
        Status:       Succeeded
        Status ID:    5
        Owner:        SYSMAN
        Target Type:  oracle_database
        Target Name:  emrep

    After implementing the above Metric Extension and using the EM CLI update_db_password verb, you will be able to stay on top of your DBSNMP password changes without experiencing an unplanned monitoring outage.

    Read the article

  • Using Javascript in Adobe Reader

    - by godleuf
    Hi, I am currently using the following script for a few documents:

        var pp = this.getPrintParams();
        pp.interactive = pp.constants.interactionLevel.automatic;
        this.print(pp);

    How do I add another command, say document.close(), so that it runs the print function and then closes the document last? Do I simply add the close command right after the print command, so it would read:

        var pp = this.getPrintParams();
        pp.interactive = pp.constants.interactionLevel.automatic;
        this.print(pp);
        document.close();

    Thanks.
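
    A hedged note: Acrobat's JavaScript API is not browser JavaScript, so there is no document.close(); the Doc object's method is closeDoc(). A sketch under that assumption:

        var pp = this.getPrintParams();
        pp.interactive = pp.constants.interactionLevel.automatic;
        this.print(pp);
        this.closeDoc(true);   // true skips the save prompt; may require a privileged/trusted context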

    Read the article

  • What's the best way to offer javascript embed that won't slow a page down?

    - by Shpigford
    I have a chunk of javascript that users can copy and paste to put on their sites. I'm currently using the following code (a la WEDJE) that allows the rest of the page to load even if my script is slow or not responding:

        <script type="text/javascript">
        var number = "987654321";
        var key = "123abc";
        (function(){
            document.write('<div id="ttp"></div>');
            s = document.createElement('script');
            s.type = "text/javascript";
            s.src = "http://example.com/javascripts/embed.js?" + Math.random();
            setTimeout("document.getElementById('ttp').appendChild(s);", 1);
        })()
        </script>

    But that method is a few years old, so I wasn't sure if there was a more efficient way of doing the same thing that others have come up with.
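
    One newer pattern, offered as a sketch rather than a drop-in replacement (it drops the document.write and the setTimeout, though you still need a target element if placement on the page matters): create the script element directly and insert it before the first script on the page, the same shape used by the well-known analytics async snippets.

        <script type="text/javascript">
        (function() {
            var s = document.createElement('script');
            s.type = 'text/javascript';
            s.async = true;   // let HTML5 browsers fetch without blocking parsing
            s.src = 'http://example.com/javascripts/embed.js?' + Math.random();
            var first = document.getElementsByTagName('script')[0];
            first.parentNode.insertBefore(s, first);
        })();
        </script>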

    Read the article

  • fb:comments does not show Log out link

    - by mahfuz05
    The fb:comments box is not showing the Log out link. Here is my code:

        <fb:comments xid="mahfuz" canpost="true" candelete="false"></fb:comments>

    and here is the JS SDK code:

        <div id="fb-root"></div>
        <script type="text/javascript">
          window.fbAsyncInit = function() {
            FB.init({appId: 'my app id', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    Read the article

  • Lucene: get matched terms in query

    - by iamrohitbanga
    What is the best way to find out which terms in a query matched against a given document returned as a hit in Lucene? I have tried a weird method involving the hit highlighting package in Lucene contrib, and also a method that searches for every word in the query against the topmost document ("docId: xy AND description: each_word_in_query"). Neither gives satisfactory results: hit highlighting does not report some of the words that matched for documents other than the first one, and I am not sure if the second approach is the best alternative.
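
    A hedged sketch of a third option, assuming a classic (2.x/3.x-era) Lucene API: IndexSearcher.explain() returns a nested Explanation whose scoring clauses show which query terms actually contributed for that document.

        // for each hit, ask the searcher why the document scored
        Explanation exp = searcher.explain(query, docId);
        // the nested descriptions list per-term weights; terms that did not
        // match the document appear with zero / non-matching clauses
        System.out.println(exp.toString());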

    Read the article

  • Can't switch on designMode in Internet Explorer

    - by Charles Anderson
    The following code works in Firefox 3.6, but not in Internet Explorer 8:

        <html>
        <head>
          <title>Example</title>
          <script type="text/javascript">
            function init() {
              alert(document.designMode);
              document.designMode = "on";
              alert(document.designMode);
            }
          </script>
        </head>
        <body onload="init()">
        </body>
        </html>

    In FF the alerts show 'off', then 'on'; in IE they're both 'Off'. What am I doing wrong?
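
    Not a confirmed diagnosis, but two workarounds that have historically behaved better in IE: make the body contentEditable, or host the editable surface in an iframe and switch designMode on the frame's document. A sketch (the iframe id is illustrative):

        function init() {
            // option 1: contentEditable on the body (well supported in IE)
            document.body.contentEditable = true;

            // option 2: designMode on an iframe's document
            // (assumes <iframe id="editor"></iframe> exists in the page)
            var frame = document.getElementById('editor');
            frame.contentWindow.document.designMode = 'on';
        }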

    Read the article
