Search Results

Search found 26313 results on 1053 pages for 'job control'.

Page 353/1053 | < Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >

  • On the Fourth Day of the SQL Series...

    - by andyleonard
    Introduction Brent Ozar ( Blog | @BrentO ) has done it again - started something. This time it's The Twelve Days of SQL Series . I was passed the baton from David Stein ( Blog | @made2mentor ) who covered Day 3 with a tribute to his favorite post . And Now, My Selection: I liked Rafael Salas' ( Blog | @RafSalas ) post entitled Denali CTP 1: SSIS Parameters – Bring Them On! Rafael is a friend and fellow SSIS guy. In this post he does a good job pointing out the differences between SSIS Parameters...(read more)

    Read the article

  • SQL Server Installation Checklist

    - by Jonathan Kehayias
    The other night I was asked on Twitter by Todd McDonald (Twitter), for a build list for SQL Server 2005 and 2008.  My initial response was to provide a link to the SQL Server Build List Blog , which documents all of the builds of SQL Server and provides links to the KB articles associated with the builds.  However, this wasn’t what Todd was after, he actually wanted a reference for an installation checklist for SQL Server.  I have a number of these that I use in my job, and they vary...(read more)

    Read the article

  • Accessing Server-Side Data from Client Script: Accessing JSON Data From an ASP.NET Page Using jQuery

    When building a web application, we must decide how and when the browser will communicate with the web server. The ASP.NET WebForms model greatly simplifies web development by providing a straightforward mechanism for exchanging data between the browser and the server. With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. On postback, the server sends the entire contents of the web page back to the browser, which then displays this new content. With WebForms we don't need to spend much time or effort thinking about how or when the browser will communicate with the server or how that returned information will be processed by the browser. It just works. While this approach certainly works and has its advantages, it's not without its drawbacks. The primary concern with postback forms is that they require a large amount of information to be exchanged between the browser and the server. Specifically, the browser sends back all of its form fields (including hidden ones, like view state, which may be quite large) and then the server sends back the entire contents of the web page. Granted, there are scenarios where this large quantity of data needs to be exchanged, but in many cases we can use techniques that exchange much less information. However, these techniques necessitate spending more time and effort thinking about how and when to have the browser communicate with the server and intelligently deciding on what information needs to be exchanged. This article, the first in a multi-part series, examines different techniques for accessing server-side data from a browser using client-side script. Throughout this series we will explore alternative ways to expose data on the server so that it can be accessed from the browser using script; we will also examine various tools for communicating with the server from JavaScript, including jQuery and the ASP.NET AJAX library. Read on to learn more! Read More >
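    To give a flavor of the techniques the series covers, here is a minimal hedged sketch (the handler URL and field names are illustrative, not from the article): jQuery's $.getJSON requests JSON from a server-side endpoint and updates the page without a full postback.

        // Sketch only: /Products.ashx is a hypothetical ASP.NET handler that returns JSON
        $.getJSON("/Products.ashx", { category: "books" }, function (data) {
            // data is the already-parsed JSON response
            $.each(data, function (index, product) {
                $("#productList").append("<li>" + product.Name + "</li>");
            });
        });

    Compared with a full postback, only the JSON payload crosses the wire, which is exactly the "exchange much less information" point the article makes.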

    Read the article

  • Bunny Inc. – Episode 1. Mr. CIO meets Mr. Executive Manager

    - by kellsey.ruppel(at)oracle.com
    To make accurate and timely business decisions, executive managers are constantly in need of valuable information that is often hidden in old-style traditional systems. What can Mr. CIO come up with to help make Mr. Executive Manager's job easier at Bunny Inc.? Take a look and discover how you too can make informed business decisions by combining back-office systems with social media. Bunny Inc. -- Episode 1. Mr. CIO meets Mr. Executive Manager
    Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, e20bunnies

    Read the article

  • Mock Objects for Unit Testing

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, or is dealing with mock objects strictly a developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage? Thanks
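    For context on what such a mock object looks like in practice, here is a minimal sketch using JUnit 4 with Mockito (Mockito is an assumption for illustration; the question only mentions JUnit and TestNG):

        // Minimal sketch: a mock stands in for a dependency of the unit under test
        import static org.mockito.Mockito.*;
        import static org.junit.Assert.assertEquals;

        import java.util.List;
        import org.junit.Test;

        public class MockExampleTest {
            @Test
            public void returnsStubbedValue() {
                @SuppressWarnings("unchecked")
                List<String> mockedList = mock(List.class);   // create the mock
                when(mockedList.get(0)).thenReturn("first");  // stub a call

                assertEquals("first", mockedList.get(0));     // exercise it
                verify(mockedList).get(0);                    // assert the interaction happened
            }
        }

    In most teams this kind of mock is written by the developer alongside the unit test itself, with QA working more at the integration and system levels, but the exact split varies by organization.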

    Read the article

  • WinXP How to Tunnel LPT over USB

    - by Michael Pruitt
    I have a Windows program that accesses a device connected to an LPT (1-3) 25-pin port. The communication is bidirectional, and I suspect the control lines are also accessed directly. I would like to migrate the device to a machine that does not have an LPT port. I saw the dos2usb software, but that takes the output (from a DOS program) and 'prints' it formatted for a specific printer. I need a raw LPT connection, and a cable that provides access to all the control signals. I do have a USB-to-36-pin-Centronics cable that may have the extra signals. I use it with a vinyl cutter that doesn't like most of the USB dongles. It comes up as USB001. Would adding and sharing a generic printer, then mapping LPT1 to the share, get me closer? Would that work for a parallel port scanner? My preferred solution is a USB cable with a driver that will map it to LPT1, LPT2, or LPT3.
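    On the generic-printer idea, a sketch of the usual approach (no guarantee it preserves the behavior this device needs): share the USB printer, then map an LPT port to the share so legacy software can print to LPT1. The computer and share names below are placeholders.

        net use LPT1: \\COMPUTERNAME\SharedPrinter /persistent:yes

    Note that this redirection only carries spooled print data in one direction; it does not expose the status and control lines, so it usually will not help with bidirectional devices or a parallel-port scanner.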

    Read the article

  • OpenVPN on ec2 bridged mode connects but no Ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have OpenVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client. I checked the following, and everything seems to show a successful connection between the VPN client/server and UDP traffic on 1194.

        [server] sudo tcpdump -i eth0 udp port 1194   (shows UDP traffic after establishing connection)

        [server] sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     all  --  anywhere      anywhere
        ACCEPT     all  --  anywhere      anywhere
        Chain FORWARD (policy ACCEPT)
        target     prot opt source        destination
        ACCEPT     all  --  anywhere      anywhere
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source        destination

        [server] sudo iptables -L -t nat
        Chain PREROUTING (policy ACCEPT)
        target     prot opt source        destination
        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source        destination
        MASQUERADE all  --  ip-W-X-Y-0.us-west-1.compute.internal/24  anywhere
        Chain OUTPUT (policy ACCEPT)
        target     prot opt source        destination

        [server] openvpn.log
        Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting
        Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting
        Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ]
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ]
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5'
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba'
        Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38
        Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected]
        Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected]
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889
        Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST'
        Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1)
        Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889

        [client] tracert google.com
        Tracing route to google.com [74.125.71.104] over a maximum of 30 hops:
          1   347 ms   349 ms   348 ms  PC [w.X.Y.Z]
          2     *        *        *     Request timed out.

    I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (Note: W.X.Y.Z == Amazon EC2 private IP address).

        bridge config on br0
        ifconfig eth0 0.0.0.0 promisc up
        brctl addbr br0
        brctl addif br0 eth0
        ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up
        route add default gw W.X.Y.1 br0

        /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html)
        local W.X.Y.Z
        dev tap0
        up "/etc/openvpn/up.sh br0"
        down "/etc/openvpn/down.sh br0"
        ;server W.X.Y.0 255.255.255.0
        server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200
        ;push "route W.X.Y.0 255.255.255.0"
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"
        push "dhcp-option DNS 208.67.220.220"
        tls-auth ta.key 0 # This file is secret
        user nobody
        group nogroup
        log-append openvpn.log

        iptables config
        sudo iptables -A INPUT -i tap0 -j ACCEPT
        sudo iptables -A INPUT -i br0 -j ACCEPT
        sudo iptables -A FORWARD -i br0 -j ACCEPT
        sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward

        Routing tables added
        route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        W.X.Y.0         0.0.0.0         255.255.255.0   U     0      0        0 br0
        0.0.0.0         W.X.Y.1         0.0.0.0         UG    0      0        0 br0

        C:\>route print
        ===========================================================================
        Interface List
        32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9
        15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2
        14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Network Adapter
        10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller
         1...........................Software Loopback Interface 1
        11...00 00 00 00 00 00 00 e0  Teredo Tunneling Pseudo-Interface
        16...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter
        17...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #2
        18...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #3
        36...00 00 00 00 00 00 00 e0  Microsoft ISATAP Adapter #5
        ===========================================================================
        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0         10.1.2.1       10.1.2.201     25
                 10.1.2.0    255.255.255.0         On-link        10.1.2.201    281
               10.1.2.201  255.255.255.255         On-link        10.1.2.201    281
               10.1.2.255  255.255.255.255         On-link        10.1.2.201    281
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link        10.1.2.201    281
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link        10.1.2.201    281
        ===========================================================================
        Persistent Routes:
          Network Address          Netmask  Gateway Address  Metric
                  0.0.0.0          0.0.0.0         10.1.2.1  Default
        ===========================================================================

        C:\>tracert google.com
        Tracing route to google.com [74.125.71.147] over a maximum of 30 hops:
          1   344 ms   345 ms   343 ms  PC [W.X.Y.221]
          2     *        *        *     Request timed out.

    Read the article

  • apache2 force proxy for specific url on a subdomain

    - by Tony G.
    Hi, I have a site that has dynamic virtual subdomains using mod_rewrite, defined like this:

        <VirtualHost *:80>
            ServerName example.com
            ServerAlias *.example.com
            DocumentRoot /var/www/example.com/www
            RewriteEngine on
            RewriteCond %{HTTP_HOST} ^[^.]+\.example.com$
            RewriteRule ^(.+) %{HTTP_HOST}$1 [C]
            RewriteRule ^([^.]+)\.example.com(.*) /var/www/example.com/$1$2
        </VirtualHost>

    The problem is that I want a specific URL, say subdomain.example.com/CONTROL/, to point back to www.example.com/ using a proxy (not a URL redirect). I have tried adding:

        RewriteRule ^([^.]+)\.example.com/CONTROL(.*) /var/www/example.com/www$2 [P]

    But that didn't work. Any ideas?
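    One hedged sketch of a fix, assuming mod_proxy and mod_proxy_http are enabled: the [P] flag hands the request to mod_proxy, so the rewrite target has to be an absolute http:// URL rather than a filesystem path. Something along these lines, placed before the general rewrite rules (mod_rewrite applies rules in the order they appear):

        # Sketch only: proxy /CONTROL/ on any subdomain back to www.example.com
        RewriteCond %{HTTP_HOST} ^[^.]+\.example.com$
        RewriteRule ^/CONTROL(.*)$ http://www.example.com$1 [P,L]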

    Read the article

  • Commit in SQL

    - by PRajkumar
    SQL Transaction Control Language Commands (TCL): COMMIT

    Commit Transaction

    In SQL we use transaction control language very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

    Until you commit a transaction:
    · You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
    · You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

    Note: Many people think that typing COMMIT means your changes have been written to the data files, but that is not what happens. COMMIT tells Oracle that your work is complete; the engine verifies that the transaction achieved consistency and, when it is satisfied, sends the commit acknowledgement to the user from the log buffer, not from the data buffer. The changed data in the data buffer is written to the data files later.

    Before a transaction that modifies data is committed, the following has occurred:
    · Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
    · Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
    · The changes have been made to the database buffers of the SGA. These changes may go to disk before the transaction is committed.

    Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, it can happen some time after the transaction commits.

    When a transaction is committed, the following occurs:
    1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
    2. The log writer process (LGWR) writes redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
    3. Oracle releases locks held on rows and tables.
    4. Oracle marks the transaction complete.

    Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.

    The syntax of the COMMIT statement is:

        COMMIT [WORK] [COMMENT 'your comment'];

    · WORK is optional. The WORK keyword is supported for compliance with standard SQL; the statements COMMIT and COMMIT WORK are equivalent. Example (committing an insert):

        INSERT INTO table_name VALUES (val1, val2);
        COMMIT WORK;

    · COMMENT is also optional. This clause is supported for backward compatibility; Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING along with the transaction ID if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction. Example (committing the current transaction and associating a comment with it):

        COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

    · WRITE clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order. Example (committing the same insert operation and instructing the database to buffer the change to the redo log, without initiating disk I/O):

        COMMIT WRITE BATCH;

    Note: If you omit the WRITE clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

    WAIT | NOWAIT: Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed; this behavior can increase transaction throughput. Caution: with NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior.

    IMMEDIATE | BATCH: Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions; when sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior.

    · FORCE clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
    · In a distributed database system, the FORCE string [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN.
    · The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
    · Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

    Example (forcing an in-doubt transaction): the following statement manually commits a hypothetical in-doubt distributed transaction; query DBA_2PC_PENDING, as noted above, to find the IDs of in-doubt transactions.

        COMMIT FORCE '22.57.53';
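    To make the savepoint and lock behavior above concrete, here is a minimal sketch (the emp table and values are hypothetical, not from the article):

        INSERT INTO emp (empno, ename) VALUES (1001, 'SMITH');
        SAVEPOINT after_insert;
        UPDATE emp SET ename = 'JONES' WHERE empno = 1001;
        ROLLBACK TO SAVEPOINT after_insert;  -- undoes only the UPDATE
        COMMIT;                              -- makes the INSERT permanent, erases the savepoint, releases locks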

    Read the article

  • How do I setup an FTP server on Windows 7?

    - by Matt Frear
    I'm having trouble getting an FTP server set up on Windows 7. I've added the service using Control Panel - Programs - Turn Windows features on and off. I can see the service has started in Control Panel - Services. But then when I fire up a Windows command-line window, cmd, I get Not connected.:

        C:\Users\mattf>ftp localhost
        ftp> ls
        Not connected.
        ftp> open localhost
        ftp> ls
        Not connected.
        ftp> dir
        Not connected.
        ftp> quit
        C:\Users\mattf>

    And that's as far as I've got. I have no idea why this isn't working - could it be firewall settings?
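    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: even with the Microsoft FTP service running, Windows Firewall can block port 21. A rule like the following (the rule name is arbitrary) allows it inbound:

        netsh advfirewall firewall add rule name="FTP port 21" dir=in action=allow protocol=TCP localport=21

    If the connection still fails even from localhost, another common cause is that no FTP site has been created and started in IIS Manager, since turning on the Windows feature alone does not bind anything to port 21.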

    Read the article

  • No icons in top panel notification area

    - by PeterOakland
    (writing here because my reply to a previous post is marked "deleted by diago") Ubuntu 9.10 Karmic I have tried removing the notification area from the top panel and reinstalling it, and still I have nothing in it: no volume control, no connection icon. I have put a notification area on the bottom panel, and the only icon appearing is the connection one, looks like two plugs meeting, or a large resistor in line, or two toilet plungers face to face, so to speak. How can I get this up top where it used to be, along with the volume control icon? Thanks for any help.

    Read the article

  • 2011 PASS Board Applicants: Adam Jorgensen

    - by andyleonard
    Introduction I am interviewing 2011 PASS Board Nominee Applicants. As listed on the PASS Board Elections site the applicants are: Rob Farley Geoff Hiten Adam Jorgensen Denise McInerney Sri Sridharan Kendal Van Dyke I'm asking everyone the same questions and blogging the responses in the order received. Adam Jorgensen is next up: Interview With Adam Jorgensen 1. What's your day job? I am currently the President of Pragmatic Works Consulting ( http://www.pragmaticworks.com ). I also participate with...(read more)

    Read the article

  • Oracle Releases New Mainframe Re-Hosting in Oracle Tuxedo 11g

    - by Jason Williamson
    I'm excited to say that we've released our next generation of Re-hosting in 11g. In fact I'm doing some hands-on labs now for our Systems Integrators in Italy in a couple of weeks and targeting Latin America next month. If you are an SI, or Rehosting firm and are looking to become an Oracle Partner or get a better understanding of Tuxedo and how to use the workbench for rehosting... drop me a line.

    Oracle Tuxedo Application Runtime for CICS and Batch 11g provides a CICS API emulation and Batch environment that exploits the full range of Oracle Tuxedo's capabilities. Re-hosted applications run in a multi-node, grid environment with centralized production control. Also, enterprise integration of CICS application services benefits from an open and SOA-enabled framework. Key features include:
    - CICS Application Runtime: Can run IBM CICS applications unchanged in an application grid, which enables the distribution of large workloads across multiple processors and nodes. This simplifies CICS administration and can scale to over 100,000 users and over 50,000 transactions per second.
    - 3270 Terminal Server: Protects business users from change through support for tn3270 terminal emulation.
    - Distributed CICS Resource Management: Simplifies deployment and administration by allowing customers to run CICS regions in a distributed configuration.
    - Batch Application Runtime: Provides robust IBM JES-like job management that enables local or remote job submissions. In addition, distributed batch initiators can enable parallelization of jobs and support fail-over, shortening the batch window and helping to meet stringent SLAs.
    - Batch Execution Environment: Helps to run IBM batch unchanged and also supports JCL functionality and all common batch utilities.

    Oracle Tuxedo Application Rehosting Workbench 11g provides a set of automated migration tools integrated around a central repository. The tools provide high precision which results in very low error rates and the ability to handle large applications. This enables less expensive, low-risk migration projects. Key capabilities include:
    - Workbench Repository and Cataloguer: Ensures integrity of the migrated application assets through full dependency checking. The Cataloguer generates and maintains all relevant meta-data on source and target components.
    - File Migrator: Supports reliable migration of datasets and flat files to an ISAM or Oracle Database 11g. This is done through the automated migration utilities for data unloading, reloading and validation. It also generates logical access functions to shield developers from data repository changes.
    - DB2 Migrator: Similarly, this tool automates the migration of DB2 schema and data to Oracle Database 11g.
    - COBOL Migrator: Supports migration of IBM mainframe COBOL assets (OLTP and Batch) to open systems. Adapts programs for compiler dialects and data access variations.
    - JCL Migrator: Supports migration of IBM JCL jobs to a Tuxedo ART environment, maintaining the flow and characteristics of batch jobs.

    Read the article

  • Change environment variables as standard user (Windows 7)

    - by SealedSun
    When clicking on "Advanced system settings", I need to log in as the administrator and hence can only edit the administrator's environment variables (in addition to the machine-wide ones). How do I edit the environment variables of a standard user? Details: With the migration to Windows 7, I decided to work as a standard user instead of an unprivileged administrator. It works well so far, but I encountered a tiny problem: when I try to change per-user environment variables via the Control Panel, I have to log in as an administrator. But since I run that part of the Control Panel as the administrator, I can only edit the administrator's variables. How am I supposed to edit my own environment variables, without resorting to extreme measures such as editing the registry (as suggested in "Is there any command line tool that can be used to edit environment variables in Windows?")?
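    One workaround worth noting (a sketch, not from the original question): the built-in setx command writes to the current user's environment, so it can be run from a normal, non-elevated command prompt under the standard account. The variable name and path below are placeholders.

        rem Sets a per-user variable; processes started afterwards will see it
        setx MY_TOOLS_DIR "C:\Users\SealedSun\tools"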

    Read the article

  • 2011 PASS Board Applicants: Kendal Van Dyke

    - by andyleonard
    Introduction I am interviewing 2011 PASS Board Nominee Applicants. As listed on the PASS Board Elections site the applicants are: Rob Farley Geoff Hiten Adam Jorgensen Denise McInerney Sri Sridharan Kendal Van Dyke I'm asking everyone the same questions and blogging the responses in the order received. Kendal Van Dyke is next up: Interview With Kendal Van Dyke 1. What's your day job? I'm a Senior Technical Consultant with Insource Technologies ( http://www.insource.com/ ) in Houston, TX (but I work...(read more)

    Read the article

  • Bajaj Electricals Lites up India with Oracle SCM Apps!

    - by [email protected]
    The executive team from Bajaj Electricals/India did a fantastic job of summarizing their Oracle experience in the 5-minute clip below. Their chairman, Mr. Bajaj, along with Ram (GM) and several other business leaders, highlighted the decision to select Oracle and Oracle Consulting to connect their business applications to meet the needs of a growing country. It's a clip worth listening to and provides keen insight into both the vision and the results. http://www.oracle.com/pls/ebn/live_viewer.main?p_direct=yes&p_shows_id=8226860

    Read the article

  • hosting people asking for my account username and password to enable curl and socket function only for me

    - by Jayapal Chandran
    I have hosted my site in a shared environment. My hosting company disabled socket functions altogether, and they said they can enable them only for me if I provide a written statement. I did, but then they asked for my control panel login details so they can run some kind of script to enable it. Is it right for the hosting company to ask for my credentials? They have total control of the server, so why can't they do it themselves? Edit: Six months ago, many websites on their server got hacked. They think it happened because of socket functions, so they disabled them. They say they can enable them for specific users who do programming with them, and only by email request.

    Read the article

  • SOA Governance Starts with People and Processes

    - by Jyothi Swaroop
    While we all agree that SOA governance is about people, processes and technology, some experts are of the opinion that SOA governance begins with people and processes but needs to be empowered with technology to achieve the best results. Here's an interesting piece from David Linthicum on eBizq: In the world of SOA, the concept of SOA governance is getting a lot of attention. However, how SOA governance is defined and implemented really depends on the SOA governance vendor who just left the building within most enterprises. Indeed, confusion is a huge issue when considering SOA governance, and the core issues are more about the fundamentals of people and processes, and not about the technology. SOA governance is a concept used for activities related to exercising control over services in an SOA, including tracking the services, monitoring the services, and controlling changes made to the services, simply put. The trouble comes in when SOA governance vendors attempt to define SOA governance around their technology, all with different approaches to SOA governance. Thus, it's important that those building SOAs within the enterprise take a step back and understand what is really needed to support the concept of SOA governance. The value of SOA governance is pretty simple. Since services make up the foundation of an SOA, and are at their essence the behavior and information from existing systems externalized, it's critical to make sure that those accessing, creating, and changing services do so using a well controlled and orderly mechanism. Those of you who already have governance in place, typically around enterprise architecture efforts, will be happy to know that SOA governance does not replace those processes, but becomes a mechanism within the larger enterprise governance concept. People and processes are the first things on the list to get under control before you begin to toss technology at this problem. This means establishing an understanding of SOA governance within the team members, including why it's important, who's involved, and the core processes that are to be followed to make SOA governance work. Indeed, the core SOA governance strategy should really be created independently of the technology. The technology will change over the years, but the core processes and discipline should be relatively durable over time.

    Read the article

  • Nginx location issue

    - by dave
    I'm trying to set a longer (30-day) 'expires' header for my images in the /misc-stuff/ directory only. This is what I'm using for my site:

        # Serve static files directly from nginx
        location ~* \.(jpg|jpeg|gif|png|bmp|ico|pdf|flv|swf|exe|html|htm|txt|css|js) {
            add_header Cache-Control public;
            add_header Cache-Control must-revalidate;
            expires 7d;
        }

    I want to keep that block to handle regular site images, but create a new block to handle the /misc-stuff/ directory. I have tried:

        location ^~ /misc-stuff/ { ... }

    The problem I'm having now is that my backup .php files in that directory show up as plain text if someone tries to access them. How do I set it up so ONLY .gif images in the /misc-stuff/ directory are affected?
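    A hedged sketch of one way to do this (the directory name comes from the question; 30d matches the stated goal): nginx checks regex locations in the order they appear and uses the first match, so a more specific regex placed above the generic static-file block wins for .gif files under /misc-stuff/.

        # Sketch: place this above the generic static-file location block
        location ~* ^/misc-stuff/.*\.gif$ {
            add_header Cache-Control public;
            expires 30d;
        }

    A regex location also avoids the side effect of "location ^~ /misc-stuff/": the ^~ modifier stops regex matching for that prefix, which is why requests for the .php files in that directory never reach the PHP handler and come back as plain text.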

    Read the article

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple small (10 page) website and besides a contact form it could be written in pure html. It is currently built with classic asp and Dreamweaver templates. The reason I'm not simply writing 10 html pages is that I want to keep the layout all in 1 place so would need either includes or a masterpage. I don't want to use Dreamweaver templates, or batch processing (like org-mode) because I want to be able to edit using notepad (or Visual Studio) because occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files. Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, this is again way over the top. I aim to get a text file with the club details (probably in JSON or xml format) and use that as the source for the clubs page. There will need to be a bit of programming for this as the HQ site is unable to provide an extract / feed so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated - but I'm happy to have that triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence that has nothing to do with the rest of the site. Ideally I'd like to use Mercurial (or git) to publish, I know Bitbucket (and github) both serve static page sites so they wouldn't work in this scenario (dynamic pages and a contact form) but that's the model I'd like to use if there is such a thing. My requirements are: Simple templating system, 1 place to define header, footers, menu etc., that can be edited using just notepad. Very minimal / lightweight framework. I don't need a monster for 10 pages Must run either on IIS7 (shared Go Daddy Windows hosting) or other free host
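    On the clubs idea, a minimal hedged sketch of the client-side part (the file name and field names are illustrative): load the JSON file once and filter it with jQuery, so the page itself stays static.

        // Sketch only: clubs.json is a hypothetical flat file of club records
        $.getJSON("clubs.json", function (clubs) {
            $("#county-filter").on("change", function () {
                var county = $(this).val();
                var matches = $.grep(clubs, function (club) {
                    return county === "" || club.county === county;
                });
                $("#club-list").empty();
                $.each(matches, function (i, club) {
                    $("#club-list").append("<li>" + club.name + " (" + club.town + ")</li>");
                });
            });
        });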

    Read the article

  • Google Compute Engine Office Hours: August 22, 2012

    Google Compute Engine Office Hours: August 22, 2012 Office hours with the Google Compute Engine Team on August 22, 2012. The slides can be viewed here: goo.gl The tech talk portion of this session was about OAuth and Service Accounts, an area which the Google Compute Engine team has done a great job simplifying. From: GoogleDevelopers Views: 80 7 ratings Time: 52:42 More in Science & Technology

    Read the article

  • Jobs, jobs, jobs, jobs: in Java technology (3/2012)

    - by hinkmond
    If you're looking for an opportunity to work on the latest Java technology, we have some job openings on our team. We are currently planning some pretty cool projects that you would work on! See Java Technology Jobs at Oracle: Req IRC1722640 Req IRC1722647 Req IRC1722654 So, check it out. You'll get the opportunity to program Java devices, work on cutting-edge embedded platforms, and get an assigned free blog at the Oracle blog site too. Won't that be fun? Hinkmond

    Read the article

  • Specialized course in Web programming or Generalized course in Computer Science?

    - by Sugan
    I am planning to do my masters in Computer Science in the UK. I am interested in web programming. What should I opt for: a general course in Computer Science, or some specialized course in Web Programming? If web programming, are there a lot of colleges offering these courses, and are there a lot of job offers in the UK? I am not sure whether I can ask this question here; if not, point me to where I can ask it.

    Read the article

  • links for 2010-05-10

    - by Bob Rhubart
    Announcing the MOS WCI "Community" (World of WebCenter Interaction) In this community you'll find a product related discussion forum moderated by Oracle WebCenter Interaction support engineers, recommended tips and tricks, links to knowledge base articles and best practices for setting up and administering up your environment. We hope you'll take a minute to have a look through the community. (tags: oracle otn webcenter enterprise2.0) Jason Williamson: Tuxedo Runtime for CICS and Batch Webcast "The notion that mainframes can be rehosted on open system is pretty well accepted. There are still some hold out CxO's who don't believe it, but those guys typically are not really looking to migrate anyway and don't take an honest look at the case studies, history and TPC reports." Jason Williamson (tags: oracle otn entarch tuxedo) Tom Hofte: Analyzing Out-Of-Memory issues in WebLogic 10.3.3 with JRockit 4.0 Flight Recorder Tom Hofte shows you "how to capture automatically an overall WLS system image, including a JFR image, after an out-of-memory (OOM) exception has occured in the JVM hosting WLS 10.3.3." (tags: oracle otn weblogic soa java) Install Control Center Agent on Oracle Application Server (Oracle Warehouse Builder (OWB) Weblog) Qianqian Wu show you how to Install and Configure the Application Server; Deploy the Control Center Agent to the Application Server; Optional Configuration Tasks (tags: oracle otn bi datawarehousing) Frank Buytendijk: BI and EPM Landscape "Organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive. In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them 'upperware.'" -- Frank Buytendijk (tags: oracle otn enterprisearchitecture bi)

    Read the article

< Previous Page | 349 350 351 352 353 354 355 356 357 358 359 360  | Next Page >