Search Results

Search found 34162 results on 1367 pages for 'oracle products distributed document capture'.

Page 440/1367 | < Previous Page | 436 437 438 439 440 441 442 443 444 445 446 447  | Next Page >

  • In SharePoint, why can I "multiple document upload" a 47,297 byte file, but not a 47,298 byte file?

    - by Jim
    It's strange. I can upload a document named 47k.txt that is 47,297 bytes using the "Multiple Document Upload" feature. However, if I add a single character to the end of the text file, the upload fails. Also, if I rename the file to 47kx.txt and try to upload it, it fails. This is the error I get in the SharePoint logs:

      Category: General
      Event ID: 8jzm
      Level: High
      Message: #90012: An error was encountered while processing files on the server. Try uploading one file at a time by using the single upload page.

    The same error is reported in a message box on the client side. Does anybody know why this would happen?

    Read the article

  • How can I convert an OpenOffice document to PDF from the Linux command line?

    - by Norman Ramsey
    I have students who, when asked for PDF, sometimes hand me an OpenOffice document or spreadsheet. file(1) can identify these documents, but I've been unable to discover how to convert them to PDF using the command line. (The man page for ooffice(1) lists an option to print a document but not to convert to PDF.) Google is unhelpful, except for giving me the uneasy feeling that this can't be done without a nifty script in a language I don't know against an API whose documentation I can't find. Can anyone help me solve the problem of converting an OpenDocument to PDF using only the Unix command line?
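    For reference, one hedged approach is to drive the office suite headlessly; this assumes a reasonably recent LibreOffice install (the ooffice binary from the question's era did not ship a --convert-to option), and the sketch below simply wraps that same command from Java, with the binary name "libreoffice" and the paths as assumptions:

      import java.io.File;

      public class OdtToPdf {
          public static void main(String[] args) throws Exception {
              // Usage: java OdtToPdf /path/to/document.odt
              File input = new File(args[0]);
              // Assumes a LibreOffice binary named "libreoffice" on the PATH that
              // supports --headless --convert-to; adjust the name/path for your install.
              Process p = new ProcessBuilder(
                      "libreoffice", "--headless", "--convert-to", "pdf",
                      "--outdir", input.getAbsoluteFile().getParent(),
                      input.getAbsolutePath())
                      .inheritIO()
                      .start();
              System.out.println("converter exited with " + p.waitFor());
          }
      }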

    Read the article

  • Docbook: Centralized glossary, where each document includes only terms which appear in it?

    - by DanM
    Trying to figure out if this (or something similar) is possible. I'm working with a collection of technical documents, all written in DocBook. The documents each contain many acronyms, technical terms and other jargon, so we need to include a glossary with each of them. The ideal situation would be this: I have a central glossary.xml file which contains a glossentry item (or similar) for each such term; then, each of the documents uses that glossary file, but only prints out the terms which appear IN that document. So, each document has its own glossary printed at the end, but the actual glossary entries are stored centrally. Is that doable?

    Read the article

  • Is there any way I can remove line breaks (not paragraph breaks) from a Word document quickly?

    - by metal gear solid
    Is there any way I can remove line breaks (not paragraph breaks) from a Word document quickly? I have a large document in columns like this:

      xxxxx x xxxx xxx xxxx xx xxxxxx x xxx x xx xxxxxxx xx xxxxx xxx
      xxxxx x xxxx xxx xxxx xx xxxxxx x xxx x xx xxxxxxx xx xxxxx xxx
      xxxxx x xxxx xxx xxxx xx xxxxxx x xxx x xx xxxxxxx xx xxxxx xxx

    and I need to remove the line breaks so it reads like this:

      xxxxx xxxxxx xxxxxxx xxxxxx xxxxxx xxxxxxxx xxxxxx x xxxx xx xxxx xxxx xxxxxxxxxxx x xxxxxxxx x x xxxxxxxxxxxxxx xxxx xxx xxxx xxxxxx xxxxx xxxxxx xxxxxxx xxxxxx xxxxxx xxxxxxxx xxxxxx x xxxx xx xxxx xxxx xxxxxxxxxxx x xxxxxxxx x x xxxxxxxxxxxxxx xxxx xxx xxxx xxxxxx xxxxx xxxxxx xxxxxxx xxxxxx xxxxxx xxxxxxxx xxxxxx x xxxx xx xxxx xxxx xxxxxxxxxxx x xxxxxxxx x x xxxxxxxxxxxxxx xxxx xxx xxxx xxxxxx

    Read the article

  • How can I refresh a document I have open in Excel in read-only mode?

    - by RoboShop
    I have an Excel document that is stored on a SharePoint Server, which I always have open on my computer in read-only mode because I need to refer to it. Every so often, in order to get the latest changes, I have to close down the file and reload it again. Are there any options within Excel 2007 which allow me to simply refresh a document I have open in read-only mode to the latest version on the server? Better still, is there a way where this could be done dynamically, without me having to hit refresh?

    Read the article

  • Why does my Excel document have 960,000 empty rows?

    - by C-dizzle
    I have an Excel document (Office 2007) on a Windows 7 machine (if that part matters, I'm not sure, but just throwing it out there). It is a list of all employee phone numbers. If I need to generate a new page, I can click on page 2 and the table will automatically generate again. The problem is, someone messed it up (it's on a network drive), and it now shows that I have over 960,000 rows of data, when I really don't! I did CTRL+END to see if any data was in the last cell, then cleared it out and deleted that row and column, but that still didn't fix it. It almost seems like it duplicates itself after the deletion. How can I fix this without recreating the entire document?

    Read the article

  • What is the proper way to set up the Apache document root in terms of privileges?

    - by racl101
    I have just installed Ubuntu 9.10 Server Edition on my machine and I wish to run my own personal local server with other users on the same LAN. First, I was wondering what directory structure is best for the web root. Should I just use /var/www/ and start putting web documents there, or should I create a folder elsewhere (maybe in the home directory)? Second, in the /var/www/ directory only the root user can create documents; however, I wish to have other users be able to create files in the document root and upload them via FTP. Should I change the permissions of the /var/www/ folder? Or, again, should I create the document root elsewhere with different permissions? What is the safest way of doing this?

    Read the article

  • How can I see when a txt file was embedded in a Word document?

    - by nono
    Is there a way to find out when an embedded txt file was created in a Word document? I'm working in Word 2010 and the extension of the document is simply .doc. The file was inserted using the Word options: Insert -> Object -> Create from file -> Insert as icon. I already tried the right-click and Properties option, but the problem is that it shows only the current date for all three values (created/modified/accessed). I also tried to get the timestamp, but that option is inactive when I'm on the txt object. Sorry not to mention it before: thank you all for the help and support, it is really appreciated.

    Read the article

  • How to implement multi-source XSLT mapping in 11g BPEL

    - by [email protected]
    In SOA 11g, you can create an XSLT mapper that uses multiple sources as input. To implement a multi-source mapper, just follow the instructions below:

    1. Drag and drop a Transform Activity onto a BPEL process.
    2. Double-click the Transform Activity; the Transform dialog window appears.
    3. Add source variables by clicking the Add icon and selecting the variable and part of the variable as needed. You can select multiple input variables. The first variable represents the main XML input to the XSL mapping, while additional variables added here are defined in the XSL mapping as input parameters.
    4. Select the target variable and its part if available.
    5. Specify the mapper file name. The default file name is xsl/Transformation_%SEQ%.xsl, where %SEQ% represents the sequence number of the mapper.
    6. Click OK; the .xsl file will be opened in graphical mode. You can map the sources to the target as usual.

    Open the mapper source code and you will notice that the variable representing the additional source payload is defined as an input parameter in the map source spec and body:

      <mapSources>
        <source type="XSD">
          <schema location="../xsd/po.xsd"/>
          <rootElement name="PurchaseOrder" namespace="http://www.oracle.com/pcbpel/po"/>
        </source>
        <source type="XSD">
          <schema location="../xsd/customer.xsd"/>
          <rootElement name="Customer" namespace="http://www.oracle.com/pcbpel/Customer"/>
          <param name="v_customer" />
        </source>
      </mapSources>
      ...
      <xsl:param name="v_customer"/>

    Let's take a look at the BPEL source code used to execute the XSLT mapper:

      <assign name="Transform_1">
        <bpelx:annotation>
          <bpelx:pattern>transformation</bpelx:pattern>
        </bpelx:annotation>
        <copy>
          <from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl',bpws:getVariableData('v_po'),'v_customer',bpws:getVariableData('v_customer'))"/>
          <to variable="v_invoice"/>
        </copy>
      </assign>

    You will see that BPEL uses the ora:doXSLTransformForDoc XPath function to execute the XSLT mapper. This function returns the result of applying the XSLT template to the document. The signature of this function is ora:doXSLTransformForDoc(template, input, [paramQName, paramValue]*), where:

    - template is the XSLT mapper name
    - input is the string representation of the XML input
    - paramQName is the parameter defined in the XSLT mapper as the additional source
    - paramValue is the additional source payload

    You can add more sources to the mapper at a later stage, but you have to modify the ora:doXSLTransformForDoc call in the BPEL source code and make sure it passes the correct parameter/value pairs reflecting the changes in the XSLT mapper. So the best practices are:

    - Create the variables before creating the mapping file, so you can add multiple sources when you define the transformation in the first place, which is more straightforward than adding them later on.
    - Review the ora:doXSLTransformForDoc call in the BPEL source and make sure it passes the correct parameters to the mapper.

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API; the calls across these APIs generally mimic the Berkeley DB C API, which makes perfect sense because Berkeley DB is written in C.

    The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s, as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node; even the most sophisticated databases only had simple two-node fail-over solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or writing your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM", mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage.

    Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheap, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law; processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware, entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes caused a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage; new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands.

    These new database systems shared many qualities in common. They were designed to address massive amounts of data, millions of requests per second, and to scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record; it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple key/value look-ups or simple range queries, with only a few queries that required more complex joins. They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations.

    The success of Dynamo, and its design, inspired the next generation of non-SQL, distributed database solutions, including Cassandra, Riak and Voldemort. The problem their designers set out to solve was "reliability at massive scale", so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database: either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions, and one of them, Voldemort, chose Berkeley DB Java Edition for its node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only, log-based on-disk storage, a similar type of storage to Berkeley DB except that it is based on a hash table, which must reside entirely in memory, rather than a btree, which can live in memory or on disk.

    Berkeley DB evolved too: we added high availability (HA) and a replication manager that makes it easy to set up replica groups. Berkeley DB's replication doesn't partition the data; every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget, allowing Berkeley DB to become eventually consistent. Berkeley DB's HA scales out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node; Berkeley DB HA does not.

    So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases, when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM: the distributed, sometimes-consistent, partitioned, scale-out storage layers that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to appreciate the sophisticated features found in Berkeley DB, and even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
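    To make the "node-local key/value storage" part of the story concrete, here is a minimal sketch (not from the post) using the Berkeley DB Java Edition API in the com.sleepycat.je package; the environment directory and the key/value strings are made-up examples:

      import com.sleepycat.je.Database;
      import com.sleepycat.je.DatabaseConfig;
      import com.sleepycat.je.DatabaseEntry;
      import com.sleepycat.je.Environment;
      import com.sleepycat.je.EnvironmentConfig;
      import com.sleepycat.je.LockMode;
      import com.sleepycat.je.OperationStatus;

      import java.io.File;
      import java.nio.charset.StandardCharsets;

      public class KeyValueSketch {
          public static void main(String[] args) throws Exception {
              // Open (or create) an environment directory and a database inside it.
              File dir = new File("/tmp/bdb-env");
              dir.mkdirs();
              EnvironmentConfig envConfig = new EnvironmentConfig();
              envConfig.setAllowCreate(true);
              Environment env = new Environment(dir, envConfig);
              DatabaseConfig dbConfig = new DatabaseConfig();
              dbConfig.setAllowCreate(true);
              Database db = env.openDatabase(null, "sample", dbConfig);

              // Put a key/value pair, then read it back.
              DatabaseEntry key = new DatabaseEntry("user:42".getBytes(StandardCharsets.UTF_8));
              DatabaseEntry value = new DatabaseEntry("some payload".getBytes(StandardCharsets.UTF_8));
              db.put(null, key, value);

              DatabaseEntry found = new DatabaseEntry();
              if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                  System.out.println(new String(found.getData(), StandardCharsets.UTF_8));
              }

              db.close();
              env.close();
          }
      }

    Everything the NoSQL systems mentioned above add - partitioning, replication, a network API - sits on top of an engine doing essentially this.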

    Read the article

  • Deploying BAM Data Control Application to WLS server

    - by [email protected]
    Typically we would test our ADF pages that use a BAM Data Control using the integrated WLS server (ADRS). If we have to deploy the same application to a standalone WLS, we have to make sure the BAM server connection is created in WLS; unless we do that, we may face runtime errors.

    In Development mode of WLS (Reference): For a development-mode WebLogic Server, you can set the mode to OVERWRITE to test user names and passwords. You can set the mode by running setDomainEnv.cmd or setDomainEnv.sh with the following option added to the command. Add the following to the JAVA_PROPERTIES entry in the <FMW_HOME>/user_projects/domains/<yourdomain>/bin/setDomainEnv.sh file:

      -Djps.app.credential.overwrite.allowed=true

    In Production mode of WLS:

    1. Enable MDS: create and/or register your MDS repository. For more details refer to this.
    2. Edit adf-config.xml from your application and add the following tag:

      <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
        <mds-config version="11.1.1.000">
          <persistence-config>
            <metadata-store-usages>
              <metadata-store-usage default-cust-store="true" deploy-target="true" id="myRepos">
              </metadata-store-usage>
            </metadata-store-usages>
          </persistence-config>
        </mds-config>
      </adf-mds-config>

    3. Deploy the application to the WLS server, picking the appropriate repository during deployment from the MDS Repository dialog that pops up.

    Enterprise Manager (use these steps if using a version prior to the 11gR1 PS1 release of JDeveloper):

    1. Go to EM (http://<host>:<port>/em).
    2. In the left pane, under deployments, select Application1 (your application).
    3. In the right pane, from the top dropdown select "System MBean Browser -> oracle.adf.share.connections -> Server: AdminServer -> Server: AdminServer -> Application: <Appname> -> ADFConnections".
    4. In the right pane, click "Operations -> CreateConnection".
    5. Enter Connection Type as "BAMConnection".
    6. Enter the connection name, the same as the one defined in JDev.
    7. Click "Invoke", then click "Return".
    8. Click on Operation -> Save.
    9. Now, under ADFConnections in the navigator, select the connection just created and enter all the configuration details.
    10. Save and run the page.

    Enterprise Manager (use these steps, or the steps above, if using 11gR1 PS1 or newer):

    1. Go to EM (http://<host>:<port>/em).
    2. In the left pane, under deployments, select Application1 (your application).
    3. In the right pane, click on "Application Deployment" to invoke the dropdown, and in it select "ADF -> Configure ADF Connections".
    4. Select Connection Type as "BAM" from the drop down.
    5. Enter the connection name to be the same as the one defined in JDev.
    6. Click on "Create Connection". This should add a new row below under "BAM Connections".
    7. Select the new connection and click on the "Edit" icon. This should bring up a dialog.
    8. Specify appropriate values for all connection parameters - Username, Password, BAM Server Host, BAM Server Port, Webtier Server Host, Webtier Server Port and BAM Webtier Protocol - and then click on OK to dismiss the dialog.
    9. Click on "Apply".
    10. Run the page.

    Read the article

  • Using Managed Beans with your ADF Mobile Client Applications

    - by [email protected]
    Did you know it's easy to extend your ADF Mobile Client application with a Managed Bean, just like it is with an ADF web application? Here's how:

    1. Using the New Gallery (File -> New), create a new Java class. This class should extend oracle.adfnmc.el.utils.BeanResolver.
    2. Add this Java class as a managed bean: go to your task flow, select the Overview tab at the bottom and go to the Managed Bean section. Add an entry, name your new Managed Bean, and point it to the Java class you just created.
    3. Add your custom methods and properties to your Java class. Since reflection is not supported in the J2ME version on some platforms (BlackBerry), you need to provide dispatch code if you want to invoke/access any of your methods/properties from EL. Here's a sample: MyBeanClass.java
    4. Use Expression Language (EL) to access your properties and invoke your methods on your MCX pages. Here's a sample:

      <?xml version="1.0" encoding="UTF-8" ?>
      <amc:view xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xmlns:amc="http://xmlns.oracle.com/jdev/amc">
        <amc:form id="form0">
          <amc:menuControl refId="menu0"/>
          <amc:panelGroupLayout id="panelGroupLayout1" width="100%">
            <amc:panelGroupLayout id="panelGroupLayout2" layout="horizontal" width="100%">
              <amc:image id="image1" source="logo_sm.png"/>
              <amc:outputText value="Home" id="outputText1" verticalAlign="center"
                              fontSize="20" fontWeight="bold" foregroundColor="#ff0000"/>
            </amc:panelGroupLayout>
            <amc:commandLink text="#{MyBean.property1}" id="commandLink1"
                             actionListener="#{MyBean.doFoo}"
                             foregroundColor="#0000ff" action="patientlist"/>
          </amc:panelGroupLayout>
        </amc:form>
        <amc:menu type="main" id="menu0">
          <amc:menuGroup id="menuGroup1">
            <amc:commandMenuItem id="commandMenuItem1" action="exit" label="Exit"
                                 index="1" weight="0"/>
          </amc:menuGroup>
        </amc:menu>
      </amc:view>

    Read the article

  • Is Financial Inclusion an Obligation or an Opportunity for Banks?

    - by tushar.chitra
    Why should banks care about financial inclusion?

    First, the statistics, I think this will set the tone for this blog post. There are close to 2.5 billion people who are excluded from the banking stream and out of this, 2.2 billion people are from the continents of Africa, Latin America and Asia (McKinsey on Society: Global Financial Inclusion). However, this is not just a third-world phenomenon. According to Federal Deposit Insurance Corp (FDIC), in the US, post 2008 financial crisis, one family out of five has either opted out of the banking system or has been moved out (American Banker). Moving this huge unbanked population into mainstream banking is both an opportunity and a challenge for banks. An obvious opportunity is the significant untapped customer base that banks can target, so is the positive brand equity a bank can build by fulfilling its social responsibilities. Also, as banks target the cost-conscious unbanked customer, they will be forced to look at ways to offer cost-effective products and services, necessitating technology upgrades and innovations. However, cost is not the only hurdle in increasing the adoption of banking services. The potential users need to be convinced of the benefits of banking and banks will also face stiff competition from unorganized players. Finally, the banks will have to believe in the viability of this business opportunity, and not treat financial inclusion as an obligation.

    In what ways can banks target the unbanked

    For financial inclusion to be a success, banks should adopt innovative business models to develop products that address the stated and unstated needs of the unbanked population and also design delivery channels that are cost effective and viable in the long run.

    Through business correspondents and facilitators

    In rural and remote areas, one of the major hurdles in increasing banking penetration is connectivity and accessibility to banking services, which makes last mile inclusion a daunting challenge. To address this, banks can avail the services of business correspondents or facilitators. This model allows banks to establish greater connectivity through a trusted and reliable intermediary. In India, for instance, banks can leverage the local Kirana stores (the mom & pop stores) to service rural and remote areas. With a supportive nudge from the central bank, the commercial banks can enlist these shop owners as business correspondents to increase their reach. Since these neighborhood stores are acquainted with the local population, they can help banks manage the KYC norms, besides serving as a conduit for remittance. Banks also have an opportunity over a period of time to cross-sell other financial products such as micro insurance, mutual funds and pension products through these correspondents. To exercise greater operational control over the business correspondents, banks can also adopt a combination of branch and business correspondent models to deliver financial inclusion.

    Through mobile devices

    According to a 2012 World Bank report on financial inclusion, out of a world population of 7 billion, over 5 billion or 70% have mobile phones and only 2 billion or 30% have a bank account. What this means for banks is that there is scope for them to leverage this phenomenal growth in mobile usage to serve the unbanked population. Banks can use mobile technology to service the basic banking requirements of their customers with no frills accounts, effectively bringing down the cost per transaction. As I had discussed in my earlier post on mobile payments, though non-traditional players have taken the lead in P2P mobile payments, banks still hold an edge in terms of infrastructure and reliability.

    Through crowd-funding

    According to the Crowdfunding Industry Report by Massolution, the global crowdfunding industry raised $2.7 billion in 2012, and is projected to grow to $5.1 billion in 2013. With credit policies becoming tighter and banks becoming more circumspect in terms of loan disbursals, crowdfunding has emerged as an alternative channel for lending. Typically, these initiatives target the unbanked population by offering small loans that are unviable for larger banks. Though a significant proportion of crowdfunding initiatives globally are run by non-banking institutions, banks are also venturing into this space.

    The next step towards inclusive finance

    Banks by themselves cannot make financial inclusion a success. There is a need for a whole ecosystem that is supportive of this mission. The policy makers, that include the regulators and government bodies, must be in sync, the IT solution providers must put on their thinking caps to come out with innovative products and solutions, communication channels such as internet and mobile need to expand their reach, and the media and the public need to play an active part. The other challenge for financial inclusion is from the banks themselves. While it is true that financial inclusion will unleash a hitherto hugely untapped market, the normal banking model may be found wanting because of issues such as flexibility, convenience and reliability. The business will be viable only when there is a focus on increasing the usage of existing infrastructure and that is possible when the banks can offer the entire range of products and services to the large number of users of essential banking services. Apart from these challenges, banks will also have to quickly master and replicate the business model to extend their reach to the remotest regions in their respective geographies. They will need to ensure that the transactions deliver a viable business benefit to the bank. For tapping cross-sell opportunities, banks will have to quickly roll-out customized and segment-specific products. The bank staff should be brought in sync with the business plan by convincing them of the viability of the business model and the need for a business correspondent delivery model. Banks, in collaboration with the government and NGOs, will have to run an extensive financial literacy program to educate the unbanked about the benefits of banking. Finally, with the growing importance of retail banking and with many unconventional players eyeing the opportunity in payments and other lucrative areas of banking, banks need to understand the importance of micro and small branches. These micro and small branches can help banks increase their presence without a huge cost burden, provide bankers an opportunity to cross sell micro products and offer a window of opportunity for the large non-banked population to transact without any interference from intermediaries. These branches can also help diminish the role of the unorganized financial sector, such as local moneylenders and unregistered credit societies. This will also help banks build a brand awareness and loyalty among the users, which by itself has a cascading effect on the business operations, especially among the rural and un-banked centers.
In conclusion, with the increasingly competitive banking sector facing frequent slowdowns and downturns, the unbanked population presents a huge opportunity for banks to enhance their customer base and fulfill their social responsibility.

    Read the article

  • Security Access Control With Solaris Virtualization

    - by Thierry Manfe-Oracle
    Numerous Solaris customers consolidate multiple applications or servers on a single platform. The resulting configuration consists of many environments hosted on a single infrastructure, and security constraints sometimes exist between these environments. Recently, a customer consolidated many virtual machines belonging to both their Intranet and Extranet on a pair of SPARC Solaris servers interconnected through InfiniBand. Virtual machines were mapped to Solaris Zones, and one security constraint was to prevent SSH connections between the Intranet and the Extranet. This case study gives us the opportunity to understand how the Oracle Solaris Network Virtualization Technology (a.k.a. Project Crossbow) can be used to control outbound traffic from Solaris Zones.

    Solaris Zones from both the Intranet and Extranet use an InfiniBand network to access a ZFS Storage Appliance that exports NFS shares. Solaris global zones on both SPARC servers mount iSCSI LUs exported by the Storage Appliance. Non-global zones are installed on these iSCSI LUs. With no security hardening, if an Extranet zone gets compromised, the attacker could try to use the Storage Appliance as a gateway to the Intranet zones, or even worse, to the global zones, as all the zones are reachable from this node.

    One solution consists in using Solaris Network Virtualization Technology to stop outbound SSH traffic from the Solaris Zones. The virtualized network stack provides per-network-link flows. A flow classifies network traffic on a specific link. As an example, on the network link used by a Solaris Zone to connect to the InfiniBand, a flow can be created for TCP traffic on port 22, thereby a flow for the SSH traffic. A bandwidth can be specified for that flow and, if set to zero, the traffic is blocked. Last but not least, flows are created from the global zone, which means that even with root privileges in a Solaris zone an attacker cannot disable or delete a flow. With the flow approach, the outbound traffic of a Solaris zone is controlled from outside the zone. Schema 1 describes the new network setting once the security has been put in place.

    Here are the instructions to create a Crossbow flow as used in Schema 1:

      (GZ)# zoneadm -z zonename halt

    ...halts the Solaris Zone.

      (GZ)# flowadm add-flow -l iblink -a transport=TCP,remote_port=22 -p maxbw=0 sshFilter

    ...creates a flow on the IB partition "iblink" used by the zone to connect to the InfiniBand. This IB partition can be identified by intersecting the output of the commands 'zonecfg -z zonename info net' and 'dladm show-part'. The flow is created on port 22, for the TCP traffic, with a zero maximum bandwidth. The name given to the flow is "sshFilter".

      (GZ)# zoneadm -z zonename boot

    ...restarts the Solaris zone now that the flow is in place.

    Solaris Zones and Solaris Network Virtualization enable SSH access control on InfiniBand (and on Ethernet) without the extra cost of a firewall. With this approach, no change is required on the InfiniBand switch. All the security enforcement is put in place at the Solaris level, minimizing the impact on the overall infrastructure. The Crossbow flows come in addition to many other security controls available with Oracle Solaris, such as IP Filter and Role-Based Access Control, that can be used to tackle security challenges.

    Read the article

  • Java SE Embedded 8

    - by kshimizu-Oracle
    The post walks through the memory areas a Java process uses on an embedded system and how to measure and tune them:

    - HEAP: the Java heap, where application objects live.
    - NON-HEAP: JVM-managed memory outside the heap, consisting mainly of the Code Cache (JIT-compiled code) and Metaspace (class metadata).
    - JVM internal memory: memory used by the VM itself.

    Heap usage can be monitored with Java Mission Control or from the standard API: java.lang.Runtime.maxMemory() reports the maximum heap size (set with -Xmx), java.lang.Runtime.totalMemory() the currently committed heap (whose initial size is set with -Xms), and java.lang.Runtime.freeMemory() the free portion of it. NON-HEAP usage can likewise be read through the corresponding APIs or observed in Java Mission Control (the "NON_HEAP" attribute). At the OS level, the overall footprint of the process can be checked on Linux through procfs (VmHWM or VmRSS).

    For Embedded deployments, the choice of JVM (Minimal/Client/Server) and of a Compact profile also affects the footprint; see chapters 2-3 of the Concept Guide (http://docs.oracle.com/javase/8/embedded/embedded-concepts/basic-concepts.htm). Relevant JVM tuning options include:

    - -Xms: initial heap size
    - -Xmx: maximum heap size
    - -XX:ReservedCodeCacheSize: size of the Code Cache (with the JIT disabled, the Code Cache stays near zero)
    - -Xint: interpreter-only mode, which disables the JIT at the cost of execution speed
    - -Xss: per-thread stack size
    - -XX:CompileThreshold: number of method invocations before JIT compilation kicks in
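    As a concrete companion to the APIs listed above, here is a minimal sketch (not part of the original post) that prints the heap figures from java.lang.Runtime and the heap/non-heap usage from the standard MemoryMXBean:

      import java.lang.management.ManagementFactory;
      import java.lang.management.MemoryMXBean;

      public class MemoryReport {
          public static void main(String[] args) {
              Runtime rt = Runtime.getRuntime();
              long mb = 1024L * 1024L;
              System.out.println("max heap (-Xmx):   " + rt.maxMemory() / mb + " MB");
              System.out.println("committed heap:    " + rt.totalMemory() / mb + " MB");
              System.out.println("free in committed: " + rt.freeMemory() / mb + " MB");

              // Heap and non-heap (Code Cache, Metaspace, ...) as seen by the JVM itself.
              MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
              System.out.println("heap usage:     " + mem.getHeapMemoryUsage());
              System.out.println("non-heap usage: " + mem.getNonHeapMemoryUsage());
          }
      }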

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow.

    But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target.

    Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code:

    Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process; a minimal sketch of this idea follows after the resource list below. An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming.

    Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.

    Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections.

    Resources:

    - See the references for “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx
    - Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming
    - Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing
    - Claims-Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
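    As a rough illustration of the Stateless Programming concept described above (not from the original post), the sketch below keeps no request state in instance fields and persists it through a StateStore interface instead; the interface and its in-memory implementation are hypothetical stand-ins for whatever shared storage (for example, Azure table or blob storage) the real application would use:

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      /** Stand-in for shared storage reachable from every node (e.g. a table/blob store). */
      interface StateStore {
          void put(String key, String value);
          String get(String key);
      }

      /** In-memory placeholder implementation, good enough for a local test. */
      class InMemoryStateStore implements StateStore {
          private final Map<String, String> data = new ConcurrentHashMap<>();
          public void put(String key, String value) { data.put(key, value); }
          public String get(String key) { return data.get(key); }
      }

      /** Stateless handler: nothing about a request survives in fields, so any node can take the next call. */
      class OrderHandler {
          private final StateStore store;

          OrderHandler(StateStore store) { this.store = store; }

          String recordStep(String orderId, String step) {
              String previous = store.get(orderId);          // recover state from shared storage
              String updated = (previous == null) ? step : previous + " -> " + step;
              store.put(orderId, updated);                   // persist the new state externally
              return updated;
          }
      }

    A second node constructed with the same StateStore would see exactly the same order history, which is the property a distributed deployment needs.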

    Read the article

  • Watch the AutoVue release 20.0 Webcast - April 27 at 12pm EST

    - by [email protected]
    Join our live Webcast on Tuesday, April 27th, 2010 to discover how AutoVue release 20.0 can help you to:

    • Improve technical and business decision-making with visual access to accurate, in-context information
    • Increase operational efficiency by integrating and visually enabling existing enterprise systems
    • Drive innovation by enhancing enterprise-wide document collaboration capabilities
    • Mitigate project risk with a reliable audit trail of changes and approvals

    Click here to register for the Webcast

    Read the article

  • Unable to Update SharePoint Document Properties: Required Fields are Empty.

    - by Pari
    Hi, I am updating documents on SharePoint using the List.asmx web service. The problem I am facing is that fields are not getting updated because some required fields are missing, so to fill the required fields I have to update again. The "ID" field is compulsory at the time of the update, and we only get it after uploading the document (we get this ID from the "ows_id" attribute value).

    Edit: As said by "Janis Veinbergs", we can't get this ID until the document is actually saved. So how will I update the document, since the ID field is a must for the update?

    If I don't put the ID field:

      Error: 0x8102000a Invalid URL Parameter
      The URL provided contains an invalid Command or Value. Please check the URL again.

    If I put a null value in it:

      Error: 0x81020016 Item does not exist
      The page you selected contains an item that does not exist. It may have been deleted by another user.

    Is there any way to set document properties at the time of uploading files to SharePoint?

    Note: I am uploading the file in chunks and not using Microsoft.SharePoint.dll. Language: C#. I tried this code, but here again the properties are being set after uploading the file.

    Read the article

  • How to get a result from output parameter (SYS_REFCURSOR) of Oracle stored procedure in iBATIS 3 (by u

    - by yjacket
    I got an example of how to call an Oracle stored procedure in iBATIS 3 without a map file, and now I understand how to call it. But I have another problem: how do I get a result from the output parameter (an Oracle cursor)? Part of the exception message is "There is no setter for property named 'rs' in 'class java.lang.Class'". Below is my code. Can anyone help me?

    Oracle stored procedure:

      CREATE OR REPLACE PROCEDURE getProducts ( rs OUT SYS_REFCURSOR ) IS
      BEGIN
        OPEN rs FOR SELECT * FROM Products;
      END getProducts;

    Interface:

      public interface ProductMapper {
        @Select("call getProducts(#{rs,mode=OUT,jdbcType=CURSOR})")
        @Options(statementType = StatementType.CALLABLE)
        List<Product> getProducts();
      }

    DAO:

      public class ProductDAO {
        public List<Product> getProducts() {
          return mapper.getProducts(); // mapper is ProductMapper
        }
      }

    Full error message:

      Exception in thread "main" org.apache.ibatis.exceptions.IbatisException:
      ### Error querying database. Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
      ### The error may involve defaultParameterMap
      ### The error occurred while setting parameters
      ### Cause: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
      at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:8)
      at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:61)
      at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:53)
      at org.apache.ibatis.binding.MapperMethod.executeForList(MapperMethod.java:82)
      at org.apache.ibatis.binding.MapperMethod.execute(MapperMethod.java:63)
      at org.apache.ibatis.binding.MapperProxy.invoke(MapperProxy.java:35)
      at $Proxy8.getList(Unknown Source)
      at com.dao.ProductDAO.getList(ProductDAO.java:42)
      at com.Ibatis3Test.main(Ibatis3Test.java:30)
      Caused by: org.apache.ibatis.reflection.ReflectionException: Could not set property 'rs' of 'class org.apache.ibatis.reflection.MetaObject$NullObject' with value 'oracle.jdbc.driver.OracleResultSetImpl@1a001ff' Cause: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
      at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:154)
      at org.apache.ibatis.reflection.wrapper.BeanWrapper.set(BeanWrapper.java:36)
      at org.apache.ibatis.reflection.MetaObject.setValue(MetaObject.java:120)
      at org.apache.ibatis.executor.resultset.FastResultSetHandler.handleOutputParameters(FastResultSetHandler.java:69)
      at org.apache.ibatis.executor.statement.CallableStatementHandler.query(CallableStatementHandler.java:44)
      at org.apache.ibatis.executor.statement.RoutingStatementHandler.query(RoutingStatementHandler.java:55)
      at org.apache.ibatis.executor.SimpleExecutor.doQuery(SimpleExecutor.java:41)
      at org.apache.ibatis.executor.BaseExecutor.query(BaseExecutor.java:94)
      at org.apache.ibatis.executor.CachingExecutor.query(CachingExecutor.java:72)
      at org.apache.ibatis.session.defaults.DefaultSqlSession.selectList(DefaultSqlSession.java:59)
      ... 7 more
      Caused by: org.apache.ibatis.reflection.ReflectionException: There is no setter for property named 'rs' in 'class java.lang.Class'
      at org.apache.ibatis.reflection.Reflector.getSetInvoker(Reflector.java:300)
      at org.apache.ibatis.reflection.MetaClass.getSetInvoker(MetaClass.java:97)
      at org.apache.ibatis.reflection.wrapper.BeanWrapper.setBeanProperty(BeanWrapper.java:146)
      ... 16 more
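    This is not the iBATIS 3 answer itself, but as a sanity check it can help to confirm what the mapper ultimately has to reproduce: reading the SYS_REFCURSOR through plain JDBC with a CallableStatement and OracleTypes.CURSOR. The sketch below does that; the connection URL and credentials are made up:

      import java.sql.CallableStatement;
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import oracle.jdbc.OracleTypes;

      public class RefCursorCheck {
          public static void main(String[] args) throws Exception {
              // Hypothetical connection details; replace with your own.
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:oracle:thin:@//localhost:1521/XE", "scott", "tiger");
                   CallableStatement cs = conn.prepareCall("{call getProducts(?)}")) {
                  cs.registerOutParameter(1, OracleTypes.CURSOR);
                  cs.execute();
                  // The OUT parameter is the cursor; read it like any ResultSet.
                  try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                      while (rs.next()) {
                          System.out.println(rs.getString(1)); // first column of the Products rows
                      }
                  }
              }
          }
      }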

    Read the article

  • How do I dynamically create a document for download in Javascript?

    - by Nelson
    I'm writing some JavaScript code that generates an XML document in the client (via the Google Earth plugin). I'd like the user to be able to click a button on the page and be prompted to save that XML to a new file. If I were generating the XML server-side this would be easy: just make the button open the link. But the XML is generated client-side. I've come up with a couple of hacks that half-work, inspired in part by this StackOverflow question, but neither completely works. Here's a demo HTML page with embedded code:

      <html><head><script>
      function getData() {
          return '<?xml version="1.0" encoding="UTF-8"?><doc>Hello</doc>';
      }
      function dlDataURI() {
          window.open("data:text/xml;charset=utf-8," + getData());
      }
      function dlWindow() {
          var w = window.open();
          w.document.open();
          w.document.write(getData());
          w.document.close();
      }
      </script><body>
      <div onclick="dlDataURI()">Click for Data URL</div>
      <div onclick="dlWindow()">Click for Window</div>
      </body></html>

    The dlDataURI() version works great in Firefox, poorly in Chrome (can't save), and not at all in IE. The dlWindow() version works OK in Firefox and IE, and not well in Chrome (can't save, XML embedded inside HTML). Neither version ever prompts a user download; it always opens a new window trying to display the XML. Is there a good way to do what I want in client-side JavaScript? I'd like this to work in today's browsers, ideally Firefox, MSIE 8, and Chrome.

    Read the article

  • How to optimize this JSON/jQuery/JavaScript function in IE7/IE8?

    - by melaos
    Hi guys, I'm using this function to parse this JSON data, but I find the function to be really slow in IE7 and slightly slow in IE8. Basically the first listbox generates the main product list, and upon selection of the main list, it will populate the second list. This is my data:

      [{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":15913,"ProductName":"Creative Xmod","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":18094,"ProductName":"Sound Blaster Wireless Receiver","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16185,"ProductName":"Xdock Wireless","ProductServiceLifeId":1},{"ProductCategoryId":209,"ProductCategoryName":"X-Fi","ProductSubCategoryId":668,"ProductSubCategoryName":"External Solutions","ProductId":16186,"ProductName":"Xmod Wireless","ProductServiceLifeId":1}]

    and these are the functions that I'm using:

      //Three Product Panes function
      function populateMainPane() {
          $.getJSON('/Home/ThreePaneProductData/', function(data) {
              products = data;
              alert(JSON.stringify(products));
              var prodCategory = {};
              for (i = 0; i < products.length; i++) {
                  prodCategory[products[i].ProductCategoryId] = products[i].ProductCategoryName;
              } //end for
              //take only unique product category to be used
              var id = 0;
              for (id in prodCategory) {
                  if (prodCategory.hasOwnProperty(id)) {
                      $(".LBox1").append("<option value='" + id + "'>" + prodCategory[id] + "</option>");
                      //alert(prodCategory[id]);
                  }
              }
              var url = document.location.href;
              var parms = url.substring(url.indexOf("?") + 1).split("&");
              for (var i = 0; i < parms.length; i++) {
                  var parm = parms[i].split("=");
                  if (parm[0].toLowerCase() == "pid") {
                      $(".PanelProductReg").show();
                      var nProductIds = parm[1].split(",");
                      for (var k = 0; k < nProductIds.length; k++) {
                          var nProductId = parseInt(nProductIds[k], 10);
                          for (var j = 0; j < products.length; j++) {
                              if (nProductId == parseInt(products[j].ProductId, 10)) {
                                  addProductRow(nProductId, products[j].ProductName);
                                  j = products.length;
                              }
                          } //end for
                      }
                  }
              }
          });
      } //end function

      function populateSubCategoryPane() {
          var subCategory = {};
          for (var i = 0; i < products.length; i++) {
              if (products[i].ProductCategoryId == $('.LBox1').val())
                  subCategory[products[i].ProductSubCategoryId] = products[i].ProductSubCategoryName;
          } //end for
          //clear off the list box first
          $(".LBox2").html("");
          var id = 0;
          for (id in subCategory) {
              if (subCategory.hasOwnProperty(id)) {
                  $(".LBox2").append("<option value='" + id + "'>" + subCategory[id] + "</option>");
                  //alert(prodCategory[id]);
              }
          }
      } //end function

    Is there anything I can do to optimize this, or is this a known browser issue?

    Read the article

  • Customer Support Identifier - How to get this?

    - by megala
    I was trying to create an account on this site, which gives you a complete functional implementation of Oracle Applications 11i ( http://vis11510.solutionbeacon.net/OA_HTML/AppsLocalLogin.jsp?requestUrl=APPSHOMEPAGE&cancelUrl=http%3A%2F%2Fvis11510.solutionbeacon.net%3A80%2Foa_servlets%2Foracle.apps.fnd.sso.AppsLogin ). The site asks me to provide an Oracle CSI (Customer Support Identifier). Any idea how I can obtain one?

    Read the article

  • ActionMailer sent with body unavailable in view

    - by yelvert
    So I've got an ActionMailer mailer:

      class ReportMailer < ActionMailer::Base
        def notify_doctor_of_updated_document(document)
          recipients document.user.email_id
          from "(removed for privacy)"
          subject "Document #{document.document_number} has been updated and saved as #{document.status}"
          sent_on Time.now
          body :document => document
        end
      end

    and the view is:

      Document <%= @document.class %>

    but when running the following from the console:

      >> d = Document.last
      => #<Document id: "fff52d70-7ba2-11de-9b70-001ec9e252ed", document_number: "ABCD1234", procedures_count: 0, user_id: "630", created_at: "2009-07-28 18:18:07", updated_at: "2009-08-30 20:59:41", active: false, facility_id: 94157, status: "incomplete", staff_id: nil, transcriptionist_id: nil, job_length: nil, work_type: nil, transcription_date: nil, non_trans_edit_date: nil, pervasync_flag: true, old_id: nil>
      >> ReportMailer.deliver_notify_doctor_of_updated_document(d)
      => #<TMail::Mail port=#<TMail::StringPort:id=0x8185326c> bodyport=#<TMail::StringPort:id=0x8184d6b4>>

    this is printed in the log:

      Sent mail to (removed for privacy)

      Date: Tue, 11 May 2010 20:45:14 -0500
      From: (removed for privacy)
      To: (removed for privacy)
      Subject: Document ABCD1234 has been updated and saved as incomplete
      Mime-Version: 1.0
      Content-Type: multipart/alternative; boundary=mimepart_4bea082ab4ae8_aa4800b81ac13f5

      --mimepart_4bea082ab4ae8_aa4800b81ac13f5
      Content-Type: text/plain; charset=iso-8859-1
      Content-Transfer-Encoding: Quoted-printable
      Content-Disposition: inline

      Document NilClass=

      --mimepart_4bea082ab4ae8_aa4800b81ac13f5--

    Read the article

  • Create a folder

    - by rima
    Hi there, I want to know how I can create a folder via Oracle Forms Builder. Is it possible? I mean, I want to create a folder dynamically and afterwards open it in Internet Explorer for the customer, so that the customer can easily copy his files. I use Oracle 6i.

    Read the article
