Search Results

Search found 26438 results on 1058 pages for 'calendar table'.

Page 635/1058 | < Previous Page | 631 632 633 634 635 636 637 638 639 640 641 642  | Next Page >

  • New WordPress posts generate 404 errors.

    - by Steve
    I had a working installation of WordPress, and I recently ran into an issue where trying to log in to the back-end redirected the browser to the login URL of the previous domain WordPress was installed on. I fixed this by reinstalling WordPress, and I can now log in to the back-end, but any new posts I make, as well as my old posts, generate 404 errors. Additionally, if I try to navigate to any category page, I again receive a 404 error. I have looked at the wp_posts table in my database, and the GUID field of each row contains the correct domain name and URL structure. What should I be checking here?
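    Since the GUIDs look right, a minimal next check (sketched below in MySQL, assuming the default wp_ table prefix and a placeholder domain) is whether the siteurl and home options still point at the old domain, because WordPress builds post and category links from those values rather than from the GUIDs:

        -- Minimal MySQL sketch; the wp_ prefix and http://example.com are placeholders.
        SELECT option_name, option_value
        FROM wp_options
        WHERE option_name IN ('siteurl', 'home');

        -- If either value still shows the old domain, point both at the new one:
        UPDATE wp_options
        SET option_value = 'http://example.com'
        WHERE option_name IN ('siteurl', 'home');

    After that, re-saving the permalink structure under Settings > Permalinks regenerates the rewrite rules, which is a common fix when the admin works but posts and category pages return 404s.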

    Read the article

  • Converting Creole to HTML, PDF, DOCX, ..

    - by Marko Apfel
    Challenge
    We documented a project on GitHub using its wiki. For most articles we used Creole as the markup language. Now we have to deliver much of that content to our client in a more conventional format such as PDF or DOCX, so we need an automated way to extract all relevant content, merge it together and convert it to the new format.

    Problem
    One of the most popular toolsets for converting between formats is Pandoc. Unfortunately, Pandoc does not support Creole (see its conversion matrix).

    Approach
    So we need an intermediate step: converting from Creole to a format Pandoc does support. Creole/c is a Creole-to-HTML converter and does exactly what we need. After converting our Creole content to HTML, we can use Pandoc for all subsequent tasks.

    Solution
    Getting the Creole content: first of all we need the Creole content on our local machines. This is easy: because a GitHub wiki is itself a Git repository, we can clone it to our machine. In the working copy we then see all the files, and the file suffix tells us the markup language.

    Converting and merging the Creole content to HTML: because we want the content of several Creole files in one HTML file, we have to convert and merge all the input files into a single output file. Creole/c has an option (-b) that generates only the HTML below the <body> tag, and that is our hook. We manually create the preceding HTML tags (<html>, <head>, ..), then merge all the needed Creole content into our output file, and finally add the closing tags. This can be done with a bit of old-fashioned DOS magic:

        REM === Generate the intro tags ===
        ECHO ^<html^> > %TMP%\output.html
        ECHO ^<head^> >> %TMP%\output.html
        ECHO ^<meta name="generator" content="creole/c"^> >> %TMP%\output.html
        ECHO ^</head^> >> %TMP%\output.html
        ECHO ^<body^> >> %TMP%\output.html

        REM === Mix in all interesting Creole stuff with creole/c ===
        .\Creole-C\bin\creole.exe -b .\..\datamodel+overview.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdCaptureMode.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+domain+CvdDamageReducingActivity.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+lookup+IncidentDamageCodes.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+Attachments.creole >> %TMP%\output.html
        .\Creole-C\bin\creole.exe -b .\..\datamodel+table+TrafficLights.creole >> %TMP%\output.html

        REM === Generate the outro tags ===
        ECHO ^</body^> >> %TMP%\output.html
        ECHO ^</html^> >> %TMP%\output.html

        REM === Convert the Html file to Docx with Pandoc ===
        .\Pandoc\bin\pandoc.exe -o .\Database-Schema.docx %TMP%\output.html

    Some explanation: the first ECHO call creates the file, so the opening <html> tag is sent via > to a temporary working file; all following calls append to that file via >>. The tag characters < and > must be escaped, which is done with the caret sign (^). We use a file in the default temporary folder (%TMP%) to avoid writing into our working folders (better for continuous integration). Both toolsets (Creole/c and Pandoc) are copied to a versioned tools folder in the wiki; this is committable and causes no problems after pushing, since GitHub does nothing with it. That folder also contains the batch file (Export-Docx.bat) for all of these steps. Pandoc recognizes the conversion from the file name suffixes, so it is enough to specify only the input and output files.

    Read the article

  • TDE Tablespace Encryption 11.2.0.1 Certified with EBS 11i

    - by Steven Chan
    Oracle Advanced Security is an optionally licensed Oracle 11g Database add-on.  Oracle Advanced Security Transparent Data Encryption (TDE) offers two different features: column encryption and tablespace encryption.  TDE Tablespace Encryption 11.2.0.1 is now certified with Oracle E-Business Suite Release 11i. What is Transparent Data Encryption (TDE)? Oracle Advanced Security Transparent Data Encryption (TDE) allows you to protect data at rest. TDE helps address privacy and PCI requirements by encrypting personally identifiable information (PII) such as Social Security numbers and credit card numbers. TDE is completely transparent to existing applications, with no triggers, views or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks. Authorization checks include verifying the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies.
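    As a rough illustration of how transparent this is at the SQL level, here is a minimal sketch for Oracle 11g, assuming an encryption wallet has already been created, with placeholder names throughout:

        -- Minimal Oracle 11g sketch; wallet password, tablespace and file names are placeholders.
        ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";

        CREATE TABLESPACE secure_data
          DATAFILE '/u01/oradata/ORCL/secure_data01.dbf' SIZE 100M
          ENCRYPTION USING 'AES256'
          DEFAULT STORAGE (ENCRYPT);

    Any table or index created in such a tablespace is then encrypted on disk with no application changes, which is the point made above.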

    Read the article

  • database api commands

    - by Rahul Mehta
    I am developing a database API for a project, and I am designing commands for getting data from the database. For example, I have a gib table, and the command for it is: getgib name alias limit fields. If the user passes a name, e.g. getgib rahul, it returns all the gib records whose name is like 'rahul'. If an alias is given, it returns all the gibs owned by the user with that alias (user id). limit restricts the number of records returned by the query, and fields is the list of extra fields I want to add to the select query. The commands are set so far, but: Question 1: I also want to get gibs by gibid, so how should I add this? Any suggestion to improve my commands is welcome. Question 2: if the user doesn't want to specify the name and only wants the gibs for a given alias, what separator should I use in place of the name?
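    To make the discussion concrete, here is a hedged sketch of the SQL such a getgib call might translate into; the table and column names (gib, name, owner_alias, gib_id, created_at) are assumptions, not taken from the question:

        -- Hypothetical translation of: getgib rahul - 10 created_at
        SELECT g.gib_id, g.name, g.owner_alias, g.created_at  -- base columns plus the extra "fields"
        FROM gib AS g
        WHERE g.name LIKE '%rahul%'          -- the "name" argument
           -- OR g.owner_alias = 'rahul'     -- the "alias" argument, used when name is skipped
        LIMIT 10;                            -- the "limit" argument

    For Question 2, one common choice is to reserve a placeholder token such as "-" for skipped positional arguments (getgib - rahul), or to switch to named arguments (getgib alias=rahul) so no placeholder is needed at all.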

    Read the article

  • Best in-memory cache of DB objects for Silverlight [closed]

    - by Jon
    Hi, I'd like to set up a cache of database objects (i.e. rows in a table) in memory in silverlight, which I'll do using WCF and linq-to-sql. Once I have the objects in memory, I'm planning on using MSMQ to receive new objects whenever they have been modified. It's a somewhat complex approach but the goal is to reduce trips to the database and allow instant data communication between Silverlight applications that are connected to the MSMQ. My Silverlight applications are meant to be long-running and the amount of data to be cached will not be large. I'm planning on saving the in-memory cache using local storage. Anyway, in order to process the updated objects that come in, I'd like to know if the user has changed the existing object. Could I use some event relating to data-binding to set a flag indicating that the object has changes? Maybe there's a better way to do the cache entirely? Thanks!

    Read the article

  • nginx PPA does not work?

    - by Peter Smit
    I want to use the newest version of nginx, so I wanted to add the nginx/stable PPA:

    sudo add-apt-repository ppa:nginx/stable
    sudo apt-get update

    However, the upgrade command says that there are no upgrades available and nginx is still the old version. Did I do something wrong? I am using Ubuntu Server 10.04 Lucid.

    add-apt-repository output:

    $ sudo apt-add-repository ppa:nginx/stable
    Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv 8B3981E7A6852F782CC4951600A6F0A3C300EE8C
    gpg: requesting key C300EE8C from hkp server keyserver.ubuntu.com
    gpg: key C300EE8C: "Launchpad Stable" not changed
    gpg: Total number processed: 1
    gpg:              unchanged: 1

    apt-cache policy output:

    $ sudo apt-cache policy nginx
    nginx:
      Installed: 0.7.65-1ubuntu2
      Candidate: 0.7.65-1ubuntu2
      Version table:
     *** 0.7.65-1ubuntu2 0
            500 http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid/universe Packages
            100 /var/lib/dpkg/status

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article about SQL SERVER – Fundamentals of Columnstore Index, and it was very well received by the community. However, one suggestion I keep receiving about that article is that many readers wanted to see the columnstore index in action but were not able to. Some readers had not installed SQL Server 2012, and some did not have a machine powerful enough to recreate the big table involved in the demo. For that reason, I have created a small video. I have also written two more articles on the columnstore index; please read them as a follow-up to the video: SQL SERVER – How to Ignore Columnstore Index Usage in Query SQL SERVER – Updating Data in A Columnstore Index Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video
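    For readers who cannot watch the video, here is a minimal T-SQL sketch in the same spirit for SQL Server 2012; the table and column names are made up, and a much smaller table will do for experimentation:

        -- Minimal T-SQL sketch (SQL Server 2012); dbo.FactSales and its columns are made up.
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_CS
        ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);

        -- An aggregate query can now use the columnstore index:
        SELECT ProductKey, SUM(SalesAmount) AS TotalSales
        FROM dbo.FactSales
        GROUP BY ProductKey;

        -- To compare against the row-store plan, the index can be ignored per query:
        SELECT ProductKey, SUM(SalesAmount) AS TotalSales
        FROM dbo.FactSales
        GROUP BY ProductKey
        OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);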

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface.

% cat idx5.c
int _idx5[5] = { 0, 1, 2, 3, 4 };
#pragma weak idx5 = _idx5

int
idx5_func(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (_idx5[index]);
}

A mapfile is required to describe the interface provided by this shared object.

% cat mapfile
$mapfile_version 2
STUB_OBJECT;
SYMBOL_SCOPE {
        _idx5 {
                ASSERT { TYPE=data; SIZE=4[5] };
        };
        idx5 {
                ASSERT { BINDING=weak; ALIAS=_idx5 };
        };
        idx5_func;
    local:
        *;
};

The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c
#include <stdio.h>

extern int _idx5[5], idx5[5], idx5_func(int);

int
main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 5; i++)
                (void) printf("[%d] %d %d %d\n",
                    i, _idx5[i], idx5[i], idx5_func(i));
        return (0);
}

The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub
% ln -s libidx5.so.1 stublib/libidx5.so
% elfdump -d stublib/libidx5.so | grep STUB
      [11]  FLAGS_1           0x4000000           [ STUB ]

The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute.

% cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc
% ./a.out
ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory
Killed
% LD_PRELOAD=stublib/libidx5.so.1 ./a.out
ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime
Killed

We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file.

% cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1

Once the real object has been built in the lib subdirectory, the program can be run.

% ./a.out
[0] 0 0 0
[1] 1 1 1
[2] 2 2 2
[3] 3 3 3
[4] 4 4 4

Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need:

Name        Meaning
...         ...
_ET_DYN     shared object
_ET_EXEC    executable object
_ET_REL     relocatable object
...         ...

STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object.

STUB_OBJECT;

A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • Generalize, or Fix The Problem?

    - by Droogans
    Which of these two programmers is "better", from a managerial standpoint? The first programmer is Albert. You tell Al to make a system that will pass you the salt at the dinner table. He does it in less than a day. It works fine. The second programmer is Ben. Ben is told to make a program to pass the salt, and after two days, he's still working on it. It will save time in the long run...if you need pepper, ketchup, etc. There isn't any clear indication that there will be a need for this, but it's not improbable. Who's the better programmer to have working under you, as a manager?

    Read the article

  • DBCC CHECKDB (BatmanDb, REPAIR_ALLOW_DATA_LOSS) &ndash; Are you Feeling Lucky?

    - by David Totzke
    I’m currently working for a client on a PowerBuilder to WPF migration.  It’s one of those “I could tell you, but I’d have to kill you” kind of clients and the quick-lime pits are currently occupied by the EMC tech…but I’ve said too much already. At approximately 3 or 4 pm that day users of the Batman[1] application here in Gotham[1] started to experience problems accessing the application.  Batman[2] is a document management system here that also integrates with the ERP system.  Very little goes on here that doesn’t involve Batman in some way.  The errors being received seemed to point to network issues (TCP protocol error, connection forcibly closed by the remote host etc…) but the real issue was much more insidious. Connecting to the database via SSMS and performing selects on certain tables underlying the application areas that were having problems started to reveal the issue.  You couldn’t do a SELECT * FROM MyTable without it bombing and giving the same error noted above.  A run of DBCC CHECKDB revealed 14 tables with corruption.  One of the tables with issues was the Document table.  Pretty central to a “document management” system.  Information was obtained from IT that a single drive in the SAN went bad in the night.  A new drive was in place and was working fine.  The partition that held the Batman database is configured for RAID Level 5 so a single drive failure shouldn’t have caused any trouble and yet, the database is corrupted.  They do hourly incremental backups here so the first thing done was to try a restore.  A restore of the most recent backup failed so they worked backwards until they hit a good point.  This successful restore was for a backup at 3AM – a full day behind.  This time also roughly corresponds with the time the SAN started to report the drive failure.  The plot thickens… I got my hands on the output from DBCC CHECKDB and noticed a pattern.  What’s sad is that nobody that should have noticed the pattern in the DBCC output did notice.  There was a rush to do things to try and recover the data before anybody really understood what was wrong with it in the first place.  Cooler heads must prevail in these circumstances and some investigation should be done and a plan of action laid out or you could end up making things worse[3].  DBCC CHECKDB also told us that: repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB Yikes.  That means that the database is so messed up that you’re definitely going to lose some stuff when you repair it to get it back to a consistent state.  All the more reason to do a little more investigation into the problem.  Rescuing this database is preferable to having to export all of the data possible from this database into a new one.  This is a fifteen year old application with about seven hundred tables.  There are TRIGGERS everywhere not to mention the referential integrity constraints to deal with.  Only fourteen of the tables have an issue.  We have a good backup that is missing the last 24 hours of business which means we could have a “do-over” of yesterday but that’s not a very palatable option either. All of the affected tables had TEXT columns and all of the errors were about LOB data types and orphaned off-row data which basically means TEXT, IMAGE or NTEXT columns.  If we did a SELECT on an affected table and excluded those columns, we got all of the rows.  We exported that data into a separate database.  Things are looking up.  
Working on a copy of the production database we then ran DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS and that “fixed” everything up.  The allow data loss option will delete the bad rows.  This isn’t too horrible as we have all of those rows, minus the text fields, from our earlier export.  Now I could LEFT JOIN to the exported data to find the missing rows and INSERT them minus the TEXT column data. We had the restored data from the good 3AM backup that we could now JOIN to and, with fingers crossed, recover the missing TEXT column information.  We got lucky in that all of the affected rows were old and in the end we didn’t lose anything.  :O  All of the row counts along the way worked out and it looks like we dodged a major bullet here. We’ve heard back from EMC and it turns out the SAN firmware that they were running here is apparently buggy.  This thing is only a couple of months old.  Grrr…. They dispatched a technician that night to come and update it.  That explains why RAID didn’t save us. All-in-all this could have been a lot worse.  Given the root cause here, they basically won the lottery in not losing anything. Here are a few links to some helpful posts on the SQL Server Engine blog.  I love the title of the first one:
Which part of 'REPAIR_ALLOW_DATA_LOSS' isn't clear?
CHECKDB (Part 8): Can repair fix everything? (in fact, read the whole series)
Ta da! Emergency mode repair (we didn’t have to resort to this one thank goodness)
Dave
Just because I can…
[1] Names have been changed to protect the guilty.
[2] I'm Batman.
[3] And if I'm the coolest head in the room, you've got even bigger problems...
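For readers who want the shape of the recovery steps described above, here is a hedged T-SQL sketch; the database, table and column names are placeholders rather than the client's real ones, and everything runs against a copy:

    -- Repair the copy (requires single-user mode) and accept the data loss.
    ALTER DATABASE BatmanDbCopy SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (BatmanDbCopy, REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE BatmanDbCopy SET MULTI_USER;

    -- Re-insert the rows the repair deleted, minus the TEXT columns,
    -- from the export taken before the repair ran.
    INSERT INTO BatmanDbCopy.dbo.[Document] (DocumentId, Title)
    SELECT e.DocumentId, e.Title
    FROM ExportDb.dbo.[Document] AS e
    LEFT JOIN BatmanDbCopy.dbo.[Document] AS d ON d.DocumentId = e.DocumentId
    WHERE d.DocumentId IS NULL;

The missing TEXT data can then be pulled in the same way by joining to the tables restored from the good 3AM backup.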

    Read the article

  • Visual Studio Talk Show #117 is now online - Microsoft Surface (French)

    http://www.visualstudiotalkshow.com Simon Ferquel and Thomas Lebrun: Microsoft Surface. We talk with Simon Ferquel and Thomas Lebrun about the "Surface" computing system. Surface presents itself to the user as a table whose top is a surface equipped with a "multitouch" display, allowing digital content to be manipulated via a touch screen. Thomas Lebrun is an architect and developer at Access IT Paris. He is particularly interested...

    Read the article

  • Nicolas Sarkozy wants a "more suitable Hadopi 3", because the current version is "not perfect"

    Nicolas Sarkozy wants a "more suitable Hadopi 3", because the current version is "not perfect". Update of 16.12.2010 by Katleen: At midday today, an informal meeting between the President of the Republic and players from the French Internet industry was held at the Élysée over lunch. Around the table with Nicolas Sarkozy were Eric Dupin (Presse Citron), Maître Eolas, Versac, Jacques-Antoine Granjon (founder of vente-privée.com), Daniel Marhely (founder of deezer.com) and Xavier Niel (founder of Illiad). To improve how the Internet is managed in France, the head of state has already declared that he wants to create a Digital Council that is "better trained...

    Read the article

  • PHP - Centos OpenSSL error

    - by mabbs
    i'm currently having a problem with OpenSSL on my Centos 6.5 Server. it ran perfectly fine until sunday. and i checked the error_log and i saw this error in the log PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/openssl.so' - /usr/lib64/php/modules/openssl.so: cannot open shared object file: No such file or directory in Unknown on line 0 i tried phpinfo(); and i found that openssl is enabled i tried php -m it returned [PHP Modules] bz2 calendar Core ctype curl date dom ereg exif fileinfo filter ftp gd gettext gmp hash iconv interbase json libxml mbstring mcrypt memcache mysql mysqli openssl pcntl pcre PDO PDO_Firebird pdo_mysql pdo_sqlite Phar pspell readline Reflection session shmop SimpleXML snmp sockets SPL sqlite3 standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib UPDATE this is what i got from rpm -qa | grep php just like what Mike Suggested php-php-gettext-1.0.11-3.el6.noarch php-mcrypt-5.3.3-3.el6.x86_64 php-interbase-5.3.3-3.el6.x86_64 php-pdo-5.3.3-27.el6_5.1.x86_64 php-5.3.3-27.el6_5.1.x86_64 php-mysql-5.3.3-27.el6_5.1.x86_64 php-snmp-5.3.3-27.el6_5.1.x86_64 php-gd-5.3.3-27.el6_5.1.x86_64 php-xml-5.3.3-27.el6_5.1.x86_64 php-pear-1.9.4-4.el6.noarch php-pecl-memcache-3.0.5-4.el6.x86_64 phpMyAdmin-3.5.8.2-1.el6.noarch php-common-5.3.3-27.el6_5.1.x86_64 php-cli-5.3.3-27.el6_5.1.x86_64 php-devel-5.3.3-27.el6_5.1.x86_64 php-mbstring-5.3.3-27.el6_5.1.x86_64 php-xmlrpc-5.3.3-27.el6_5.1.x86_64 php-pspell-5.3.3-27.el6_5.1.x86_64

    Read the article

  • Online Introduction to Relational Databases (and not only) with Stanford University!

    - by Luca Zavarella
    How many of you know the exact definition of a "relational database"? What exactly does the adjective "relational" refer to? Many let themselves be misled into thinking this adjective relates to foreign key constraints between tables. Instead, the adjective comes from a world based on set theory, relational algebra and the concept of a relation understood as a table. Well, for those who want to dig into the fundamentals of the relational model, relational algebra, XML, OLAP and emerging "NoSQL" systems, Stanford University School of Engineering offers a free, public online introductory course on databases. This is the related web page: http://www.db-class.com/ The course lasts two months, after which there is a final exam. Passing the final exam entitles participants to receive a statement of accomplishment. A syllabus and more information are available here. Happy eLearning to you!
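    To make the relational-algebra point concrete, here is a tiny sketch in standard SQL; the Student and Enrolled tables are invented for the example:

        -- Selection (sigma), projection (pi) and join are the relational-algebra
        -- operations behind this everyday query.
        SELECT s.name                                        -- projection
        FROM Student AS s
        JOIN Enrolled AS e ON e.student_id = s.student_id    -- join
        WHERE e.course = 'Databases';                        -- selection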

    Read the article

  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server server that I am trying to get setup up. This is a 64 bit machine that I can not install "fileinfo" or "memcache". I have uninstalled these and reinstalled them using yum and pecl with no luck. Yum install fine "no error" but then get error when running php. pecl from what I can tell is only installing 32bit. Does not put anything in the lib64 directory. Here is my output from php -v: PHP Warning: PHP Startup: fileinfo: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48) Copyright (c) 1997-2010 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies Here is some other system info incase you need it uname: Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux php -m: PHP Warning: PHP Startup: fileinfo: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 PHP Warning: PHP Startup: memcache: Unable to initialize module Module compiled with module API=20050922, debug=0, thread-safety=0 PHP compiled with module API=20060613, debug=0, thread-safety=0 These options need to match in Unknown on line 0 [PHP Modules] bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext gmp hash iconv imap json ldap libxml mbstring mcrypt mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_sqlite readline Reflection session shmop SimpleXML sockets SPL standard tokenizer wddx xml xmlreader xmlrpc xmlwriter xsl zip zlib [Zend Modules] Any help would be greatly appreciated, thanks....

    Read the article

  • Rails noob - How to work on data stored in models

    - by Raghav Kanwal
    I'm a beginner with Ruby and Rails, and I have made a couple of applications, like a microposts clone and a to-do list, for starters, but I'm starting work on another project. I've got two models - user and tracker. You log in via the username, which is authenticated, and you can log data, which is stored in the tracker table. The tracker has a column named "Calories", and I would like Rails to sum all of the values entered on the same date and output that result subtracted from, say, 3000 in a new statement after the display of the model. I know this is just Ruby code; I'm just not sure how to incorporate it. :( Could someone please guide me through this? And also link me to some guides/tutorials that teach working on data from models? Thank you :)
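    For reference, here is the query the Rails code ultimately needs to produce, sketched in SQL; the column names (user_id, logged_on, calories) are guesses, since the question only mentions a "Calories" column:

        -- Remaining calories for one user for today; column names are guesses.
        SELECT 3000 - COALESCE(SUM(calories), 0) AS remaining_calories
        FROM trackers
        WHERE user_id = 42
          AND logged_on = CURRENT_DATE;

    In Rails terms this is a sum aggregate over a filtered scope on the Tracker model, and computing it in a model or helper method keeps the arithmetic out of the view.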

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3Download the presentation here See also: Blog about Alert Monitoring and Problem Notification Blog about Using Operational Profiles to Install Packages and other content Here is quick summary of what you can do with OS Analytics in Ops Center: View historical charts and real time value of CPU, memory, network and disk utilization Find the top CPU and Memory processes in real time or at a certain historical day Determine proper monitoring thresholds based on historical data Drill down into a process details Where to start To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, Memory utilization and Network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version):  In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. Here is an example of one of the processes: One of the cool things is that you can see the process tree for this process along with some port binding and open file descriptors. Next, click the "Processes" tab to see real time information of all the processes on the machine: An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center - this column will remain empty. The "Threshold" tab is particularly helpful - you can view historical trends of different monitored values and based on the graph - determine what the monitoring values should be: You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: Red for critical, Yellow for warning and Blue for Information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU Usage, try shorter time frames which are more detailed, such as one hour or one day. Applying new monitoring values When first applying new values to monitored attributes - a popup will come up asking if it's OK to get you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use this current machine as a "Gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done with applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click the "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies". 
Visiting the past Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: You can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and Memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time: Under the hood The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/ An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp in which they were taken If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of the zone along with "os.zip" in it If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller The actual script collecting the data can be viewed for debugging purposes as well: On Linux, the location is: /opt/sun/xvmoc/private/os_analytics/collect If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr   The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, I'm currently using SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a HashMap to store this dictionary. When I want to do fuzzy string matching, I first check whether the string is in the HashMap, and then I iterate through all of the other potential keys, calculating the string similarity and storing the key/value pair(s) with the highest similarity. Depending on which dictionary I am using, this can take a long time (12,330 - 1,800,035 entries). Is there any way to speed this up? I am currently writing a memoization function/table as a way of speeding it up, but can anyone think of a better approach? Maybe a different data structure, or something else I'm missing. Many thanks in advance, Nathan

    Read the article

  • Creating Rich View Components in ASP.NET MVC

    - by kazimanzurrashid
    One of the nice thing of our Telerik Extensions for ASP.NET MVC is, it gives you an excellent extensible platform to create rich view components. In this post, I will show you a tiny but very powerful ListView Component. Those who are familiar with the Webforms ListView component already knows that it has the support to define different parts of the component, we will have the same kind of support in our view component. Before showing you the markup, let me show you the screenshots first, lets say you want to show the customers of Northwind database as a pagable business card style (Yes the example is inspired from our RadControls Suite) And here is the markup of the above view component. <h2>Customers</h2> <% Html.Telerik() .ListView(Model) .Name("customers") .PrefixUrlParameters(false) .BeginLayout(pager => {%> <table border="0" cellpadding="3" cellspacing="1"> <tfoot> <tr> <td colspan="3" class="t-footer"> <% pager.Render(); %> </td> </tr> </tfoot> <tbody> <tr> <%}) .BeginGroup(() => {%> <td> <%}) .Item(item => {%> <fieldset style="border:1px solid #e0e0e0"> <legend><strong>Company Name</strong>:<%= Html.Encode(item.DataItem.CompanyName) %></legend> <div> <div style="float:left;width:120px"> <img alt="<%= item.DataItem.CustomerID %>" src="<%= Url.Content("~/Content/Images/Customers/" + item.DataItem.CustomerID + ".jpg") %>"/> </div> <div style="float:right"> <ul style="list-style:none none;padding:10px;margin:0"> <li> <strong>Contact Name:</strong> <%= Html.Encode(item.DataItem.ContactName) %> </li> <li> <strong>Title:</strong> <%= Html.Encode(item.DataItem.ContactTitle) %> </li> <li> <strong>City:</strong> <%= Html.Encode(item.DataItem.City)%> </li> <li> <strong>Country:</strong> <%= Html.Encode(item.DataItem.Country)%> </li> <li> <strong>Phone:</strong> <%= Html.Encode(item.DataItem.Phone)%> </li> <li> <div style="float:right"> <%= Html.ActionLink("Edit", "Edit", new { id = item.DataItem.CustomerID }) %> <%= Html.ActionLink("Delete", "Delete", new { id = item.DataItem.CustomerID })%> </div> </li> </ul> </div> </div> </fieldset> <%}) .EmptyItem(() =>{%> <fieldset style="border:1px solid #e0e0e0"> <legend>Empty</legend> </fieldset> <%}) .EndGroup(() => {%> </td> <%}) .EndLayout(pager => {%> </tr> </tbody> </table> <%}) .GroupItemCount(3) .PageSize(6) .Pager<NumericPager>(pager => pager.ShowFirstLast()) .Render(); %> As you can see that you have the complete control on the final angel brackets and like the webform’s version you also can define the templates. You can also use this component to show Master/Detail data, for example the customers and its order like the following: I am attaching the complete source code along with the above examples for your review, what do you think, how about creating some component with our extensions? Download: MvcListView.zip

    Read the article

  • Nimbus Tweaking Help Needed

    - by Geertjan
    I was reading this new article on Synthetica and NetBeans RCP this morning, when I remembered this screenshot from Henry Arousell from Sweden: Here, Nimbus is heavily being used, highlighting 6 areas where Henry would really benefit from any help regarding how the foreground properties should be set: The color of the main menu (and its subsequent unfolded menu options) The TopComponent tab colors (as you can see from the screenshot, they've managed to change the foreground colors of the ones in the editor mode by setting the Nimbus property "TextText"). They cannot manipulate TopComponents in other modes, though. Table header foreground colors The foreground color of any provided composite component, like a JFileChooser or the ICEPdf viewer or any other panel with label components. The progress bar message color. The status bar message color, A Nimbus expert is needed to help here, though it seems to me that some of the solutions have already been identified, or are similar, in the article pointed out above.

    Read the article

  • Find odd and even rows using $.inArray() function when using jQuery Templates

    - by hajan
    In the past period I made series of blogs on ‘jQuery Templates in ASP.NET’ topic. In one of these blogs dealing with jQuery Templates supported tags, I’ve got a question how to create alternating row background. When rendering the template, there is no direct access to the item index. One way is if there is an incremental index in the JSON string, we can use it to solve this. If there is not, then one of the ways to do this is by using the jQuery’s $.inArray() function. - $.inArray(value, array) – similar to JavaScript indexOf() Here is an complete example how to use this in context of jQuery Templates: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server">     <style type="text/css">         #myList { cursor:pointer; }                  .speakerOdd { background-color:Gray; color:White;}         .speaker { background-color:#443344; color:White;}                  .speaker:hover { background-color:White; color:Black;}         .speakerOdd:hover { background-color:White; color:Black;}     </style>     <title>jQuery ASP.NET</title>     <script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.4.4.min.js" type="text/javascript"></script>     <script src="http://ajax.aspnetcdn.com/ajax/jquery.templates/beta1/jquery.tmpl.min.js" type="text/javascript"></script>     <script language="javascript" type="text/javascript">         var speakers = [             { Name: "Hajan1" },             { Name: "Hajan2" },             { Name: "Hajan3" },             { Name: "Hajan4" },             { Name: "Hajan5" }         ];         $(function () {             $("#myTemplate").tmpl(speakers).appendTo("#myList");         });         function oddOrEven() {             return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd";         }     </script>     <script id="myTemplate" type="text/x-jquery-tmpl">         <tr class="${oddOrEven()}">             <td> ${Name}</td>         </tr>     </script> </head> <body>     <table id="myList"></table> </body> </html> So, I have defined stylesheet classes speakerOdd and speaker as well as corresponding :hover styles. Then, you have speakers JSON string containing five items. And what is most important in our case is the oddOrEven function where $.inArray(value, data) is implemented. function oddOrEven() {     return ($.inArray(this.data, speakers) % 2) ? "speaker" : "speakerOdd"; } Remark: The $.inArray() method is similar to JavaScript's native .indexOf() method in that it returns -1 when it doesn't find a match. If the first element within the array matches value, $.inArray() returns 0. From http://api.jquery.com/jQuery.inArray/ So, now we can call oddOrEven function from inside our jQuery Template in the following way: <script id="myTemplate" type="text/x-jquery-tmpl">     <tr class="${oddOrEven()}">         <td> ${Name}</td>     </tr> </script> And the result is I hope you like it. Regards, Hajan

    Read the article

  • 'Getting Started with Oracle Data Integrator 11g: A Hands-On Tutorial' book is now available

    - by Julien Testut
    We are pleased to announce the availability of the first book on Oracle Data Integrator published by Packt Publishing:  Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial Authors: Peter C. Boyd-Bowman, Christophe Dupupet, Denis Gray, David Hecksel,  Julien Testut, Bernard Wheeler. Congratulations to everyone who contributed to this book! You can get more information about 'Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial' including the table of contents and a sample chapter at http://www.packtpub.com/oracle-data-integrator-11g-getting-started/book. The book is available on Amazon.com, Amazon.co.uk, Barnes & Noble and Safari Books Online.

    Read the article

  • Webcast Replay Available: Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by BillSawyer
    I am pleased to release the replay and presentation for ATG Live Webcast Scrambling Sensitive Data in EBS 12 Cloned Environments (Presentation) Eric Bing, Senior Director, Jagan Athreya, Enterprise Manager Product Management, and Elke Phelps, Senior Principal Product Manager, discussed the Oracle E-Business Suite Template for Data Masking Pack, and how it can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. (July 2012) Finding other recorded ATG webcastsThe catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Advanced Search Stored procedure

    - by Ray Eatmon
    So I am working on an ASP.NET MVC web application that centers around lots of data and data manipulation. PROBLEM OVERVIEW: We have an advanced search with 25 different filter criteria, and I am using a stored procedure for this search. The stored procedure takes in parameters, filters for specific objects, and calculates the return data from those objects. It queries large tables (14 million records in some of them); filtering and temp tables helped alleviate some of the bottlenecks in those queries. ISSUE: The stored procedure used to take 1 minute to run, which caused a timeout and returned 0 results to the browser. I rewrote the procedure and got it down to 21 seconds, so the timeout no longer occurs. But it is only this slow the FIRST time the search is run; after that it takes about 5 seconds. I am wondering whether I should take a different approach to this problem, and whether I should worry about this type of performance issue at all if it does not time out.
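    For context, here is a hedged sketch of the usual "catch-all search" pattern such a procedure tends to follow (all object and parameter names are made up, and only 4 of the 25 filters are shown). OPTION (RECOMPILE) trades a little compile time per call for a plan tailored to the filters actually supplied, which often matters when many optional criteria would otherwise share one cached plan:

        CREATE PROCEDURE dbo.SearchOrders
            @CustomerId    INT         = NULL,
            @OrderDateFrom DATE        = NULL,
            @OrderDateTo   DATE        = NULL,
            @Status        VARCHAR(20) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;

            SELECT o.OrderId, o.CustomerId, o.OrderDate, o.Status
            FROM dbo.Orders AS o
            WHERE (@CustomerId    IS NULL OR o.CustomerId = @CustomerId)
              AND (@OrderDateFrom IS NULL OR o.OrderDate >= @OrderDateFrom)
              AND (@OrderDateTo   IS NULL OR o.OrderDate <= @OrderDateTo)
              AND (@Status        IS NULL OR o.Status = @Status)
            OPTION (RECOMPILE);
        END;

    As for the remaining gap between the first run and later runs, that is often just plan compilation plus a cold buffer cache, so if 21 seconds is acceptable for the first search of the day it may not be worth re-architecting.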

    Read the article

  • ODI 12c - Loading Files into Oracle, community post from ToadWorld

    - by David Allan
    There's a complete soup to nuts post from Deepak Vohra on the Oracle community pages of ToadWorld on loading a fixed length file into the Oracle database. This post is interesting from a few fronts; firstly this is the out of the box experience, no specialized KMs so just basic integration from getting the software installed to running a mapping. Also it demonstrates fixed length file integration including how to use the ODI UI to define the fields and pertinent properties.  Check the blog post out below.... http://www.toadworld.com/platforms/oracle/w/wiki/10935.loading-text-file-data-into-oracle-database-12c-with-oracle-data-integrator-12c.aspx Hopefully you also find this useful, many thanks to Deepak for sharing his experiences. You could take this example further and illustrate how to load into Oracle using the LKM File to Oracle via External table knowledge module which will perform much better and also leverage such things as using wildcards for loading many files into the 12c database.

    Read the article
