Search Results

Search found 41250 results on 1650 pages for 'oracle database'.


  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding Database Projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Fortunately, VS2012 does that for you and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even if I had converted them in the previous session and saved the solution.
    Converting the project(s): When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: "Functional changes required. Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1." If you accept, your project is converted, and it should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file, which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion that references to dbschema files could be added to VSDB projects.
    Cleaning up: You will notice that your project file is now a sqlproj file but the old dbproj is still here. In fact, at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If, like me, you plan on using VS2012 exclusively, you can get rid of the following files, which are still on your disk and in your source control: the dbproj and dbproj.vspscc files, Properties/Database.sqlcmdvars, Properties/Database.sqldeployment, Properties/Database.sqlpermissions and Properties/Database.sqlsettings. You might wonder where the information which used to be in the Properties files is now stored. Permissions: a Permissions.sql was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes). SQLVars: they are defined inside the publish.xml files. Deployment: they are also in the publish.xml files. Settings: I was unable to find where those are now; I suppose they are not defined anymore.
    But Visual Studio still says my database projects should be converted! I had this error upon closing and then re-opening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files).
    More: For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Read the article

  • If I build a solution in C#, how do I add a database to the final program?

    - by DevX
    Apologies if this is a very basic question, but I am preparing to create an executable for a Visual Studio C# application. (My first time!) My application uses a database that I'm currently storing in SQL Server. This works fine while I'm coding, since I created the database manually on my computer. I see that VS2008 has created an .exe in my bin/debug folder, but how do I ensure that any new user (who doesn't already have the database) doing a fresh install gets the database as well?
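    One common approach, sketched below, is to ship the schema as a SQL script alongside the application and create the database the first time the program runs. This is only a sketch under stated assumptions: it assumes SQL Server Express is installed on the target machine, and the database name ("MyAppDb"), the script file name ("Schema.sql") and the connection strings are illustrative, not taken from the question.

        using System.Data.SqlClient;
        using System.IO;

        class DatabaseInstaller
        {
            // Runs once at application start-up: creates the database if it is
            // missing, then replays the schema script shipped with the application.
            // Assumes Schema.sql contains a single batch (no GO separators).
            public static void EnsureDatabase()
            {
                const string masterConnection =
                    @"Server=.\SQLEXPRESS;Database=master;Integrated Security=True";

                using (var conn = new SqlConnection(masterConnection))
                using (var cmd = conn.CreateCommand())
                {
                    conn.Open();
                    // Create the database only if it does not already exist.
                    cmd.CommandText = "IF DB_ID('MyAppDb') IS NULL CREATE DATABASE MyAppDb";
                    cmd.ExecuteNonQuery();
                }

                const string appConnection =
                    @"Server=.\SQLEXPRESS;Database=MyAppDb;Integrated Security=True";

                using (var conn = new SqlConnection(appConnection))
                using (var cmd = conn.CreateCommand())
                {
                    conn.Open();
                    // Replay the schema (tables, seed data) from the script file
                    // that the installer copied next to the executable.
                    cmd.CommandText = File.ReadAllText("Schema.sql");
                    cmd.ExecuteNonQuery();
                }
            }
        }

    A setup project or prerequisite bootstrapper can take care of installing SQL Server Express itself; the sketch only covers creating and populating the application's database.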

    Read the article

  • How to manage transactions for the database and file system in a JEE environment?

    - by Michael Lu
    I store a file's attributes (size, update time, ...) in a database. So the problem is how to manage a transaction that spans both the database and the file system. In a JEE environment, JTA is only able to manage the database transaction. If updating the database succeeds but the file operation fails, should I write a file-rollback method for this? Moreover, file operations in an EJB container violate the EJB spec. What's your opinion? Thanks!

    Read the article

  • How to read from a database and write to a text file with C#?

    - by user147685
    How do I read from a database and write to a text file? I want to write/copy (not sure what to call it) the records inside my database into a text file, where one row in the database equals one line in the text file. I'm having no problem on the database side. For creating the text file, the documentation mentions FileStream and StreamWriter. Which one should I use?
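    A minimal sketch of one way to do this with ADO.NET, assuming a SQL Server table; the connection string, table and column names below are illustrative only. StreamWriter wraps a FileStream for you, so you rarely need to create the FileStream yourself when writing text.

        using System.Data.SqlClient;
        using System.IO;

        class ExportToTextFile
        {
            static void Main()
            {
                const string connectionString =
                    @"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=True";

                using (var conn = new SqlConnection(connectionString))
                using (var writer = new StreamWriter("records.txt"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("SELECT Id, Name, Price FROM Records", conn))
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // One database row becomes one line in the text file.
                            writer.WriteLine("{0}\t{1}\t{2}",
                                reader["Id"], reader["Name"], reader["Price"]);
                        }
                    }
                }
            }
        }

    In short, StreamWriter is usually the more convenient choice here, since it works in terms of strings and lines; FileStream operates on raw bytes.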

    Read the article

  • How to reloadData in a tableView when the tableView gets its data from a database?

    - by Ajeet Kumar Yadav
    I am new to iPhone development. I am building an application that takes values from a database and displays them in a table view. In this application we save data from one data table to another; this works the first time we add data, but the second time the application crashes. I do not understand how to solve this problem. The code is given below. My app delegate code for inserting values from one table into the other is: -(void)sopinglist { //////databaseName= @"SanjeevKapoor.sqlite"; databaseName =@"AjeetTest.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(addStmt == nil) { ///////////const char *sql = "insert into Dataa(item) Values(?)"; const char *sql = " insert into Slist select * from alootikki"; ///////////// const char *sql =" Update Slist ( Incredients, Recipename,foodtype) Values(?,?,?)"; if(sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK) NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database)); } /////for( NSString * j in k) sqlite3_bind_text(addStmt, 1, [k UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_int(addStmt,1,i); // sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT); // sqlite3_bind_double(addStmt, 2, [price doubleValue]); if(SQLITE_DONE != sqlite3_step(addStmt)) NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database)); else //SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid coffeeID = sqlite3_last_insert_rowid(database); //Reset the add statement. sqlite3_reset(addStmt); // sqlite3_clear_bindings(detailStmt); //} } sqlite3_finalize(addStmt); addStmt = nil; sqlite3_close(database); } And the table view code for accessing data from the database is: SanjeevKapoorAppDelegate *appDelegate =(SanjeevKapoorAppDelegate *)[[UIApplication sharedApplication] delegate]; [appDelegate sopinglist]; ////[appDelegate recpies]; /// NSArray *a =[[appDelegate list1] componentsJoinedByString:@","]; k= [[appDelegate list1] componentsJoinedByString:@","];

    Read the article

  • Entity Framework: Need an easy-going, clean database migration solution.

    - by user469652
    I'm using Entity Framework model-first development, and I need to do database migrations often. The EF Database Generation Power Pack doesn't help a lot, because its data migration never worked here. By database migration I mean: change the model, then update the existing database from the model rather than creating a new one. Is there any free-of-charge tool for this yet? Or is this going to be a new feature of the next EF release? PS: I love Django's ORM.
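    As a point of comparison, later Entity Framework releases added code-based migrations (the EntityFramework 4.3+ NuGet package, System.Data.Entity.Migrations), which alter the existing database in place instead of recreating it. The sketch below is illustrative only: the migration name, table and column are hypothetical, and this is the code-first flavour of migrations rather than a model-first tool.

        // Hypothetical migration, assuming the EntityFramework 4.3+ NuGet package.
        using System.Data.Entity.Migrations;

        public class AddCustomerPhoneColumn : DbMigration
        {
            public override void Up()
            {
                // Apply the incremental schema change to the existing database.
                AddColumn("dbo.Customers", "Phone", c => c.String(maxLength: 20, nullable: true));
            }

            public override void Down()
            {
                // Allow the change to be rolled back.
                DropColumn("dbo.Customers", "Phone");
            }
        }

    The Up/Down pair is the key design point: the existing database is altered in place and the change can be reverted, which is the behaviour the question is after.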

    Read the article

  • How could I send an HTML5 SQLite database from an iPhone?

    - by Edoc
    Hi everybody, I'm developing a web app for iPhone/iPod and using the client-side database. I want my web app to work mainly offline and allow users to fill the database, and at a certain point I'd like to send my database to a server to process and store all the information. The problem is I can't find a proper way to send my tables to a server in order to fill its database with the one sent from the iPhone. Does anyone have a clue?

    Read the article

  • UAT Testing for SOA 10G Clusters

    - by [email protected]
    A lot of customers ask how to verify their SOA clusters and make them production ready. Here is a list that I recommend using for 10G SOA Clusters.
    Test cases for each component - Oracle Application Server 10G
    General Application Server test cases
    This section covers very general test cases to make sure that the Application Server cluster has been set up correctly and that you can start and stop all the components in the server via opmnctl and the AS Console.
    Test Case 1: Check if you can see AS instances in the console
    Implementation: Log on to the AS Console --> check to see if you can see all the nodes in your AS cluster. You should be able to see all the Oracle AS instances that are part of the cluster. This means that the OPMN clustering worked and the AS instances successfully joined the AS cluster.
    Result: You should be able to see whether all the instances in the AS cluster are listed in the EM console. If the instances are not listed, here are the files to check to see if OPMN joined the cluster properly: $ORACLE_HOME\opmn\logs\opmn.log and $ORACLE_HOME\opmn\logs\opmn.dbg. If OPMN did not join the cluster properly, please check the opmn.xml file to make sure the discovery multicast address and port are correct (see this link for the opmn documentation). Restart the whole instance using opmnctl stopall followed by opmnctl startall. Log on to the AS console to see if the instance is listed as part of the cluster.
    Test Case 2: Check to see if you can start/stop each component
    Implementation: Check each OC4J component on each AS instance. Start each and every component through the AS console to see if they will start and stop. Do that for each and every instance.
    Result: Each component should start and stop through the AS console. You can also verify that the component started by checking opmnctl status after logging onto each box associated with the cluster.
    Test Case 3: Add/modify a datasource entry through the AS console on a remote AS instance (not on the instance where EM is physically running)
    Implementation: Pick an OC4J instance. Create a new data-source through the AS console. Modify an existing data-source or connection pool (optional).
    Result: Open $ORACLE_HOME\j2ee\<oc4j_name>\config\data-sources.xml to see if the new (and/or the modified) connection details and data-source exist. If they do, then the AS console has successfully updated a remote file and MBeans are communicating correctly.
    Test Case 4: Start and stop AS instances using the opmnctl @cluster command
    Implementation: Go to $ORACLE_HOME\opmn\bin and use opmnctl @cluster to start and stop the AS instances.
    Result: Use opmnctl @cluster status to check for start and stop statuses.
    HTTP server test cases
    This section deals with use cases to test HTTP server failover scenarios.
    In these examples the HTTP server will be talking to the BPEL console (or any other web application that the client wants), so the URL will be http://hostname:port/BPELConsole
    Test Case 1: Shut down one of the HTTP servers while accessing the BPEL console and see the request routed to the second HTTP server in the cluster
    Implementation: Access the BPELConsole. Check $ORACLE_HOME\Apache\Apache\logs\access_log --> check for the timestamp and the URL that was accessed by the user. The timestamp and URL would look like this: 1xx.2x.2xx.xxx [24/Mar/2009:16:04:38 -0500] "GET /BPELConsole=System HTTP/1.1" 200 15. After you have figured out which HTTP server this is running on, shut down this HTTP server by using opmnctl stopproc --> this is a graceful shutdown. Access the BPELConsole again (please note that you should have a load balancer in front of the HTTP servers and have configured the Apache virtual host; see the EDG for steps). Check $ORACLE_HOME\Apache\Apache\logs\access_log again for the timestamp and the URL that was accessed by the user; the timestamp and URL would look like the above.
    Result: Even though you are shutting down the HTTP server, the request is routed to the surviving HTTP server, which is then able to route the request to the BPEL Console, and you are able to access the console. By checking the access log file you can confirm that the request is being picked up by the surviving node.
    Test Case 2: Repeat the same test as above, but instead of calling opmnctl stopproc, pull the network cord of one of the HTTP servers so that the LBR routes the request to the surviving HTTP node --> this simulates a network failure.
    Test Case 3: In Test Case 1 we simulated a graceful shutdown; in this case we will simulate an Apache crash
    Implementation: Use opmnctl status -l to get the PID of the HTTP server that you would like to forcefully bring down. On Linux, use kill -9 <PID> to kill the HTTP server. Access the BPEL console.
    Result: As you shut down the HTTP server, OPMN will restart the HTTP server. The restart may be so quick that the LBR may still route the request to the same server. One way to check whether the HTTP server restarted is to check the new PID and the timestamp in the access log for the BPEL console.
    BPEL test cases
    This section covers scenarios dealing with BPEL clustering using jGroups, BPEL deployment, and testing related to BPEL failover.
    Test Case 1: Verify that jGroups has initialized correctly. There is no real testing in this use case, just a visual verification, by looking at log files, that jGroups has initialized correctly. Check the opmn log for the BPEL container for all nodes at $ORACLE_HOME/opmn/logs/<group name><container name><group name>~1.log. This logfile will contain jGroups-related information during startup and steady-state operation.
    Soon after startup you should find log entries for UDP or TCP.
    Example jGroups log entries for UDP:
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: sockets will use interface 144.25.142.172
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.UDP createSockets
    INFO: socket information:
    local_addr=144.25.142.172:1127, mcast_addr=228.8.15.75:45788, bind_addr=/144.25.142.172, ttl=32
    sock: bound to 144.25.142.172:1127, receive buffer size=64000, send buffer size=32000
    mcast_recv_sock: bound to 144.25.142.172:45788, send buffer size=32000, receive buffer size=64000
    mcast_send_sock: bound to 144.25.142.172:1128, send buffer size=32000, receive buffer size=64000
    Apr 3, 2008 6:30:37 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:1127
    -------------------------------------------------------
    Example jGroups log entries for TCP:
    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.172:7900
    Apr 3, 2008 6:23:39 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.172:7900
    -------------------------------------------------------
    In the log below, "socket created on" indicates that the TCP socket is established on the node's own IP address and port, and "created socket to" shows that the second node has connected to the first node, matching the logfile above with the IP address and port.
    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable start
    INFO: server socket created on 144.25.142.173:7901
    Apr 3, 2008 6:25:40 PM org.collaxa.thirdparty.jgroups.protocols.TP$DiagnosticsHandler bindToInterfaces
    -------------------------------------------------------
    GMS: address is 144.25.142.173:7901
    -------------------------------------------------------
    Apr 3, 2008 6:25:41 PM org.collaxa.thirdparty.jgroups.blocks.ConnectionTable getConnection
    INFO: created socket to 144.25.142.172:7900
    Result: By reviewing the log files, you can confirm whether BPEL clustering at the jGroups level is working and whether the jGroups channel is communicating.
    Test Case 2: Test connectivity between BPEL nodes
    Implementation: Test connections between the different cluster nodes using ping, telnet, and traceroute. The presence of firewalls and the number of hops between cluster nodes can affect performance, as they have a tendency to take down connections after some time or simply block them. Also reference Metalink Note 413783.1: "How to Test Whether Multicast is Enabled on the Network."
    Result: Using the above tools you can confirm whether multicast is working and whether the BPEL nodes are communicating.
    Test Case 3: Test deployment of a BPEL suitcase to one BPEL node.
    Implementation: Deploy a HelloWorld BPEL suitcase (or any other client-specific BPEL suitcase) to only one BPEL instance using ant, JDeveloper, or the BPEL console. Log on to the second BPEL console to check whether the BPEL suitcase has been deployed.
    Result: If jGroups has been configured and is communicating correctly, BPEL clustering will allow you to deploy a suitcase to a single node, and jGroups will notify the second instance of the deployment. The second BPEL instance will go to the DB and pick up the new deployment after receiving notification. The result is that the new deployment will be "deployed" to each node, by only deploying to a single BPEL instance in the BPEL cluster.
    Test Case 4: Test whether the BPEL server fails over and whether all asynch processes are picked up by the secondary BPEL instance
    Implementation: Deploy 2 asynch processes: a ParentAsynch process which calls a ChildAsynch process with a variable telling it how many times to loop or how many seconds to sleep, and a ChildAsynch process that loops or sleeps or has an onAlarm. Make sure that the processes are deployed to both servers. Shut down one BPEL server. On the active BPEL server call ParentAsynch a few times (use the load generation page). When you have enough ParentAsynch instances, shut down this BPEL instance and start the other one. Please wait till this BPEL instance shuts down fully before starting up the second one. Log on to the BPEL console and see that the instances were picked up by the second BPEL node and completed.
    Result: The BPEL instances will fail over to the secondary node and complete the flow.
    ESB test cases
    This section covers the use cases involved in testing an ESB cluster. For this section please follow Metalink Note 470267.1, which covers the basic tests to verify your ESB cluster.

    Read the article

  • Full-text indexing? You must read this

    - by Kyle Hatlestad
    For those of you who may have missed it, Peter Flies, Principal Technical Support Engineer for WebCenter Content, gave an excellent webcast on database searching and indexing in WebCenter Content. It's available for replay along with a download of the slidedeck. Look for the one titled 'WebCenter Content: Database Searching and Indexing'. One of the items he led with...and concluded with...was a recommendation on optimizing your search collection if you are using full-text searching with the Oracle database. This can greatly improve your search performance, and it applies to both the OracleTextSearch and DATABASE.FULLTEXT search methods. Peter describes how a collection can become fragmented over time as content is added, updated, and deleted. Just like you should defragment your hard drive from time to time to get your files placed on the disk in the most optimal way, you should do the same for the search collection. And optimizing the collection is just a simple procedure call that can be scheduled to be run automatically: begin ctx_ddl.optimize_index('FT_IDCTEXT1','FULL', parallel_degree =>'1'); end; When I checked my own test instance, I found my collection had a row fragmentation of about 80%. After running the optimization procedure, it went down to 0%. The knowledgebase article On Index Fragmentation and Optimization When Using OracleTextSearch or DATABASE.FULLTEXT [ID 1087777.1] goes into detail on how to check your current index fragmentation, how to run the procedure, and then how to schedule the procedure to run automatically. While the article mentions scheduling the job weekly, Peter says he is now recommending this be run daily, especially on more active systems. And just as a reminder, be sure to involve your DBA with your WebCenter Content implementation as you go to production and over time. We recently had a customer complain of slow performance of the application when it was discovered the database was starving for memory. So it's always helpful to keep a watchful eye on your database.

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder), and this category may have lots of sub-categories, and each of the sub-categories could have 1000s of media containers (think file reference objects) which contain information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationship between these categories is stored in a database together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazy-loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to the URL of the video file, they can then watch the video. The number of users could grow into the 1000s, and the sub-categories and videos could be in the 10000s as the system grows. The question is: should we carry on the way it is currently working, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
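    A minimal sketch of one option, assuming .NET 4's System.Runtime.Caching is available; the cache-key scheme, the five-minute expiration and the repository delegate standing in for the existing database call are all illustrative assumptions, not details from the question.

        using System;
        using System.Collections.Generic;
        using System.Runtime.Caching;

        public class CategoryService
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            // Stand-in for the existing data-access call that hits the database.
            private readonly Func<int, IList<string>> loadSubCategoriesFromDb;

            public CategoryService(Func<int, IList<string>> loadSubCategoriesFromDb)
            {
                this.loadSubCategoriesFromDb = loadSubCategoriesFromDb;
            }

            public IList<string> GetSubCategories(int categoryId)
            {
                string key = "category:" + categoryId;

                // Serve repeated tree expansions from memory when possible.
                var cached = Cache.Get(key) as IList<string>;
                if (cached != null)
                    return cached;

                // Otherwise hit the database once and keep the result for a few
                // minutes, so most requests never reach the database at all.
                IList<string> fresh = loadSubCategoriesFromDb(categoryId);
                Cache.Set(key, fresh, new CacheItemPolicy
                {
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
                });
                return fresh;
            }
        }

    Whether this is worth doing depends on how often the category tree and permissions change; a short absolute expiration bounds staleness while still absorbing most of the repeated reads.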

    Read the article

  • Does Apache ever give incorrect "out of threads" errors?

    - by Eli Courtwright
    Lately our Apache web server has been giving us this error multiple times per day: [Tue Apr 06 01:07:10 2010] [error] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting We raised our ThreadsPerChild setting from 50 to 100, but we still get the error. Our access logs indicate that these errors never even happen at periods of high load. For example, here's an excerpt from our access log (ip addresses and some urls are edited for privacy). As you can see, the above error happened at 1:07 and only a small handful of requests occurred in the several minutes leading up to the error: 99.88.77.66 - - [06/Apr/2010:00:59:33 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-icons_222222_256x240.png HTTP/1.1" 304 - 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - - [06/Apr/2010:00:59:34 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_dadada_1x400.png HTTP/1.1" 200 111 99.88.77.66 - mpeu [06/Apr/2010:00:59:40 -0400] "GET /some/dynamic/content HTTP/1.1" 200 145049 55.44.33.22 - mpeu [06/Apr/2010:01:06:56 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/date.js HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image1.gif HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image2.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image3.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image4.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image5.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image6.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:56 -0400] "GET /WebRepository/pdfs/image7.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image8.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/image9.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/pdfs/imageA.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:57 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_flat_75_ffffff_40x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_highlight-soft_75_cccccc_1x100.png HTTP/1.1" 304 - 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 55.44.33.22 - - [06/Apr/2010:01:06:59 -0400] "GET 
/WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/images/ui-bg_glass_75_e6e6e6_1x400.png HTTP/1.1" 200 110 11.22.33.44 - mpeu [06/Apr/2010:01:18:03 -0400] "GET /other/dynamic/content HTTP/1.1" 200 12311 11.22.33.44 - - [06/Apr/2010:01:18:03 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-1.3.2.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/css/smoothness/jquery-ui-1.7.1.custom.css HTTP/1.1" 200 27374 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery/jquery-ui-1.7.1.custom/js/jquery-ui-1.7.1.custom.min.js HTTP/1.1" 304 - 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/jquery.tablesorter.min.js HTTP/1.1" 200 12795 11.22.33.44 - - [06/Apr/2010:01:18:04 -0400] "GET /WebRepository/date.js HTTP/1.1" 200 25809 For what it's worth, we're running the version of Apache that ships with Oracle 10g (some 2.0 version), and we're using mod_plsql to generate our dynamic content. Since the Apache server runs as a separate process and the database doesn't record any problems when this error occurs, I'm doubtful that Oracle is the problem. Unfortunately, the errors are freaking out our sysadmins, who are inclined to blame any and all problems which occur with the server on this error. Is this a known bug in Apache that I simply haven't been able to find any reference to through Google?

    Read the article

  • MySQL on Windows - Why, Where and How

    - by bertrand.matthelie(at)oracle.com
    Over the years Windows has become a major development and deployment platform for MySQL. As a matter of fact, Windows consistently ranks as the #1 development platform in our surveys, and now also ranks higher than any Linux distribution as a deployment platform among MySQL Community Edition users. We've made various technical resources available in our MySQL on Windows Resource Center including articles, whitepapers and archived webinars. MySQL users are also sharing their experiences and writing how-to articles, and it's great to see former MySQL/Sun/Oracle employees still contributing! Thanks Anders for a recent step-by-step part 1 article on working with MySQL on Windows. We also got feedback from customers wishing to get higher-level information about MySQL on Windows, to help them and others in their organizations better understand: Why is the world's most popular open source database so popular on Windows? What are the applications for which one should consider MySQL on Microsoft's platform? How should Windows shops relying on Microsoft databases get going with MySQL? Those are the questions we aim to answer in our guide "MySQL on Windows - Why, Where and How", that you can download here.

    Read the article

  • Virtual Box - How to open a .VDI Virtual Machine

    - by [email protected]
    How to open a .VDI Virtual Machine
    Sometimes someone shares a virtual machine with the extension .VDI with us, and we may wonder what it is and how to open it. Well, the answer is: it is a VirtualBox virtual machine. If you have not downloaded VirtualBox yet, you can do so easily; just follow this post: http://listeningoracle.blogspot.com/2010/04/que-es-virtualbox.html or http://oracleoforacle.wordpress.com/2010/04/14/ques-es-virtualbox/
    Ok, now with VirtualBox installed, open it and proceed with the following:
    1. Open the Virtual Media Manager.
    2. Click on Actions > Add and select the .VDI file. Click "Ok".
    3. Now we can register the new virtual machine - click New, and click Next.
    4. Write down a name for the virtual machine and proceed to select an operating system and version (in this case it is Linux - Oracle Enterprise Linux or RedHat). Click Next.
    5. Select the base memory amount for the virtual machine (minimum 1280 MB in our case). Click Next.
    6. Select the disk 11GR2_OEL5_32GB.vdi that was added in the Virtual Media Manager in step 2. Don't forget to leave "Boot Hard Disk (Primary Master)" selected, given it is the only disk assigned to the virtual machine. Click Next.
    7. Click Finish.
    8. This step is important: once the machine is registered, click on the Settings button.
    9. On the General option, click the Advanced settings. Here you must change the default directory to save your snapshots; my recommendation is to set it to the same directory where the .VDI file is. Otherwise you can end up with the same virtual machine and its snapshots in different paths.
    10. Now click on System, and proceed to assign the correct memory (if you did not before). Note: enable "Enable IO APIC" if you are planning to assign more than one CPU to the virtual machine. Define the processors for the virtual machine; if your processor is dual core, choose 2.
    11. Select the video memory amount you want to assign to the virtual machine.
    12. Associate more storage disks to the virtual machine if you have more VDI files (not our case). The disk must be selected as IDE Primary Master.
    13. Well, you can verify the other options, but with these changes you will be able to start the VM. Note: sometimes the VM owner may share some instructions; if so, follow those instructions.
    14. Finally, start the virtual machine (click > Start).

    Read the article

  • SQL SERVER – Identifying guest User using Policy Based Management

    - by pinaldave
    If you are following my recent blog posts, you may have noticed that I’ve been writing a lot about Guest User in SQL Server. Here are all the blog posts which I have written on this subject: SQL SERVER – Disable Guest Account – Serious Security Issue SQL SERVER – Force Removing User from Database – Fix: Error: Could not drop login ‘test’ as the user is currently logged in SQL SERVER – Detecting guest User Permissions – guest User Access Status SQL SERVER – guest User and MSDB Database – Enable guest User on MSDB Database One of the requests I received was whether we could create a policy that would check that the guest user is not enabled in user databases. Well, here is a quick tutorial to answer this. Let us see how quickly we can do it. Requirements Check if the guest user is disabled in all the user-created databases. Exclude master, tempdb and msdb database for guest user validation. We will create the following conditions based on the above two requirements: If the name of the user is ‘guest’ If the user has connect (@hasDBAccess) permission in the database Check in All user databases, except: master, tempDB and msdb Once we create these conditions, we will create a policy which will validate the conditions. Condition 1: Is the User Guest? Expand the Database >> Management >> Policy Management >> Conditions Right click on the Conditions, and click on “New Condition…”. First we will create a condition where we will validate if the user name is ‘guest’, and if it’s so, then we will further validate if it has DB access. Check the image for the necessary configuration for condition: Facet: User Expression: @Name = ‘guest’ Condition 2: Does the User have DBAccess? Expand the Database >> Management >> Policy Management >> Conditions Right click on Conditions and click on “New Condition…”. Now we will validate if the user has DB access. Check the image for necessary configuration for condition: Facet: User Expression: @hasDBAccess = False Condition 3: Exclude Databases Expand the Database >> Management >> Policy Management >> Conditions Right click on Conditions and click on “New Condition…” Now we will create a condition where we will validate whether the database name is master, tempdb or msdb; if the database name is any of these, we will not validate our first condition against them. Check the image for necessary configuration for condition: Facet: Database Expression: @Name != ‘msdb’ AND @Name != ‘tempdb’ AND @Name != ‘master’ The next step will be creating a policy which will enforce these conditions. Creating a Policy Right click on Policies and click “New Policy…” Here, we specify what condition we want to validate and what the target is. Condition: Has User DBAccess Target Database: Every Database except (master, tempdb and MSDB) Target User: Every User in Target Database with name ‘guest’ Now we have options for two evaluation modes: 1) On Demand and 2) On Schedule We will select On Demand in this example; however, you can change the mode to On Schedule through the drop down menu, and select the interval of the evaluation of the policy. Evaluate the Policies We have selected On Demand as our policy evaluation mode. We will now evaluate by means of executing Evaluate policy. Click on Evaluate and it will give the following result: The result demonstrates that one of the databases has a policy violation. Username guest is enabled in AdventureWorks database. You can disable the guest user by running the following code in AdventureWorks database.
    USE AdventureWorks; REVOKE CONNECT FROM guest; Once you run the above query, you can evaluate the policy again. Notice that the policy violation is now fixed. You can change the evaluation mode of the policy to On Schedule and validate the policy at an interval. You can check the history of the policy and detect violations. Quiz: I have created three conditions to check whether the guest user has database access or not. Now I want to ask you: Is it possible to do the same with 2 conditions? If yes, HOW? If no, WHY NOT? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Best Practices, CodeProject, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: Policy Management

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in Tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days. In particular the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss with Dan his specific definition for the term. However, in practical retail terms, I would say that it means that through social media (and other web channels such as search) we can analyze and track processes by measuring Indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost "real time" rather than through infrequent surveys that provide a "rear view" picture of your consumer behaviour. For instance, you could envision identifying when a particular set of fashion styles are breaking out from the pack, and commit a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions and manage the inventory accordingly. Search, blogging, website and store data may need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail) but so are the benefits. As Andrea says, for the first time we can start getting insight into "Why" the business is performing in a certain way rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the Decision Making process that Buyers, Merchandisers and Supply Chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think Retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • The 2010 Life Insurance Conference - Washington, DC

    - by [email protected]
    How ironic to be in Washington, DC on April 15 - TAX DAY! Fortunately, I avoided IRS offices and attended the much more enjoyable 2010 Life Insurance Conference, presented by LIMRA, LOMA, SOA and ACLI. This year's conference offered a variety of tracks focused on the Life Industry including Distribution/Marketing, Administration, Actuarial/Product Development, Regulatory, Reinsurance and Strategic Management. President and CEO of the ACLI, Frank Keating, opened the event by moderating a session titled "Executive Viewpoint on new Opportunities." Guest speakers included Ted Mathas, President and CEO of NY Life, and John Walters, President and CEO of Hartford Life. Both speakers were insightful as they shared the challenges and opportunities each company faces and the key role life insurance companies play in our society and the global economy. There were several key themes that were reiterated in multiple sessions throughout the conference - the economy is on the rebound, optimism is growing, consumer spending is up and an uptick in employment is likely to follow. The threat of a double-dip recession seems to have passed. Good news for our industry, and welcomed by all in attendance. Of special interest to me, given my background, was some research shared by both The Nolan Group and Novarica in separate sessions. Both firms indicate that policy administration upgrades/replacement projects remain a top priority in 2010. Carriers continue to invest in modern technology. Modern ultra-configurable systems enable carriers to switch from a waterfall to an agile project methodology, which often entails a "culture change" within an organization. Other themes heard throughout the two-day event: Virtually all sessions focused on People, Process and Technology! Product innovation, agility and speed to market are as important as ever. Social Networks and Twitter are becoming more popular ways of communicating with both field and dispersed staff. Several sessions focused on the application, new business and underwriting process. Companies continue looking for ways to increase market agility, accelerate speed to market, address cost issues and improve service levels across the process. They recognize the need to ease the way to do business with both producers and consumers. Author and economic futurist Jeff Thredgold presented an entertaining, informative and humorous general session on Wednesday afternoon that focused on the US and global economies, financial markets and retirement outlook. Thredgold did not disappoint anyone with his message! The Thursday morning general session was keynoted by Therese Vaughan (CEO - NAIC) and Thomas Crawford (President of C2 Group). Both speakers gave a poignant view of the recent financial crisis and discussed "Putting the Pieces Back Together." Therese spoke of the recent financial turmoil and likely changes to regulations to the financial services sector. Tom's topics focused on economic recovery and the political environment in Washington, and how that impacts our industry. Next year's event will be April 11-13, 2011 in Las Vegas. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • WebCenter Spaces 11g PS2 Template Customization

    - by javier.ductor(at)oracle.com
    Recently, we have been involved in a WebCenter Spaces customization project. The customer sent us a prototype website in HTML, and we had to transform Spaces to set the same look&feel as in the prototype. Prototype: First of all, we downloaded a Virtual Machine with WebCenter Spaces 11g PS2, the same version as the customer's. The next step was to download the ExtendWebCenterSpaces application. This is a WebCenter application that customizes several elements of WebCenter Spaces: templates, skins, landing page, etc. jDeveloper configuration is needed; we followed the steps described in the Extended Guide, and this blog was helpful too. After that, we deployed the application using the WebLogic console, created a new group space and assigned the ExtendedWebCenterSpaces template (portalCentricSiteTemplate) to it. The result was this: As you may see, there is a big difference with the prototype, which meant a lot of work to do from our side. So we needed to create a new Spaces template with its skin. Using jDeveloper, we created a new template based on the default template. We removed all menus we did not need and inserted 'include' tags for the header, breadcrumb and footers. Each of these elements was defined in an isolated jspx file. In the beginning, we faced a problem: we had to add code from the prototype (in plain HTML) to jspx templates (JSF language). We managed to work around this issue using 'verbatim' tags with 'CDATA' surrounding the HTML code in the header, breadcrumb and footers. Once the template was modified, we added css styles to the default skin. So we had some styles from portalCentricSiteTemplate plus styles from the customer's prototype. Then, we re-deployed the application, assigned this template to a new group space and checked the result. After testing, we usually found issues, so we had to do some modifications in the application, then it was necessary to re-deploy the application and restart the Spaces server. Due to this fact, the testing process takes quite a bit of time. In addition to the template and skin customization, we also customized the Landing Page using this application, and the Members task flow using another application: SpacesTaskflowCustomizationApplication. I will talk about this task flow customization in a future entry. After some issues and workarounds, eventually we managed to get this look&feel in Spaces: P.S. In this customization I was working with Francisco Vega and Jose Antonio Labrada, consultants from Oracle Malaga International Consulting Centre.

    Read the article

  • MySQL - Powering Online Media & Entertainment

    - by bertrand.matthelie(at)oracle.com
    If you're reading news, watching videos, or playing games online, you're probably relying on MySQL to do so. Facebook, YouTube, BBC News, Zynga, thePlatform and many other leading Media & Entertainment organizations chose MySQL to power their online news, gaming, social networking, advertising or other applications. During the past decade, the Media & Entertainment industry experienced a spectacular transformation. The mobile Internet is becoming the dominant media platform, and the boundaries between the different types of media (i.e. Print, TV, Radio, Internet) have increasingly blurred as we've gradually come to perform more and more of our daily activities online. To better understand how MySQL can help you win in the fast paced world of Media & Entertainment, check out our whitepaper "MySQL - Powering The Online Media & Entertainment Industry" in which we cover: The key trends shaping the evolution of the media & entertainment industry. Their implications, and the requirements they place on the infrastructure of information & entertainment services providers. How you can leverage Oracle's MySQL technologies to quickly and cost-effectively deliver new highly scalable and highly available online media & entertainment applications. You're welcome to download it here.

    Read the article

  • Mexico leading in Business Transformation Strategies:

    - by [email protected]
    By John Burke, Group Vice President, Oracle Applications Business Unit. I recently completed a business tour in Mexico, and was surprised by both the economic vibrancy of the country and the thought leadership expressed by many of the customers I met. An example of the economic vibrancy of the country: across the street from my hotel was the local Bentley dealership, Coach Store, Yves Saint Laurent and of course a Starbucks. I only made it to Starbucks. Both the Coach Store and YSL had a line of folks waiting to get in... As for thought leadership, there were several illustrations on the first day alone. I had the opportunity to meet with a branch of the Mexican Federal Government. Their questions were not about clerical task automation, far from it! We discussed citizen on-line access to fees and services - for example looking up the duty on an international goods shipment, or tracking that my taxes have been received, or the status of my request for a certain service. Eligibility, policies and status. Having an integrated rules or policy automation system that would allow businesses and citizens to access accurate information and ensure the proper collection of fees and payment for 3rd party provided services. Then in the afternoon, I met with the owner of a roofing company (note: most roofs in Mexico are flat and made of cement). This CEO started discussing how he wanted to transform his business from a cement products company to a service company and market 5-10-15 year service contracts which would guarantee the structural integrity of the roof and of course that the roof would remain waterproof. Although his products were guaranteed, they required an annual inspection, and most home owners never schedule that inspection until it is too late and water damage has occurred. These emergency calls reduce his margin and reduce customer satisfaction. This led to a discussion of business models in general and why long term differentiation can only come from service, not just for the music or news industries, but also for roofing companies! I completely agreed with the transformational concepts described in both meetings and quickly understood why there is a Bentley dealership near my hotel.

    Read the article

