Search Results

Search found 60391 results on 2416 pages for 'data generation'.


  • Announcement: Oracle Solaris 11.1

    - by uwes
    On October 3rd at Oracle OpenWorld, John Fowler announced Oracle Solaris 11.1. This first update to Oracle Solaris 11 increases uptime for the Oracle Database: 8x faster database shutdown and start-up, help for DBAs in finding and resolving I/O issues that affect performance, and 1.2x Oracle RAC throughput. Oracle Solaris 11.1 also drives up network utilization by extending network virtualization to include Edge Virtual Bridging and Data Center Bridging, which help manage network bandwidth for high-priority services and applications. Learn more and share these resources with your customers to encourage them to deploy Oracle Solaris 11.1: the press release, the Oracle Solaris 11.1 Data Sheet (PDF), What's New in Solaris 11.1, and the Oracle Solaris 11.1 FAQs. Also join the online web event Oracle Solaris 11 Innovations for your Data Center on November 7, 2012.

    Read the article

  • Interfaces Reference Model available

    - by ACShorten
    When implementing Oracle Utilities Application Framework based products, you can deploy other Oracle technologies to augment your solution. A whitepaper is now available that outlines the technology integrations possible with the various versions of the Oracle Utilities Application Framework, and how other Oracle technologies can be implemented alongside the framework to address customer requirements. The whitepaper covers a vast range of products, including Oracle Fusion Middleware, Oracle SOA Suite, Oracle Identity Management Suite, Oracle Exadata and Oracle Exalogic, Oracle VM, and database options such as Real Application Clusters, Real Application Testing, Data Guard/Active Data Guard, Compression, Partitioning, Database Vault, Audit Vault, etc. It contains a summary of the integration possibilities and links to further information, including product-specific interfaces. The whitepaper is available from My Oracle Support at KB Id: 1506855.1.

    Read the article

  • Which tool to use for "home banking"?

    - by Huygens
    I would like to manage my bank accounts in a secure manner on Ubuntu. I saw several applications in the Software Centre, but I don't know which one to choose. I don't need fancy features like stock options. I just have regular accounts which I want to follow, I don't want complicated stuff. As bank data are quite sensitive, I would highly prefer an application that does encryption of the data. Though, if you have a really cool app but it does not have this feature, as long as it offers to store the data in one dedicated place, I could do with encrypting that place. So what tool do you use that could fit my needs?

    Read the article

  • App Fabric Service Bus and Access Control Pricing

    - by kaleidoscope
    The Service Bus costs $3.99 per Connection-month on a consumption basis for individually provisioned connections. Data transfer charges would also apply. Or, if you are able to forecast your needs ahead of time, you can purchase “Packs” of Connections. For example: $9.95 for a pack of 5 Connections, $49.75 for a pack of 25, $199.00 for a pack of 100, or $995 for a pack of 500, plus data transfer charges. Connection Packs represent an effective rate of $1.99 per Connection-month. Access Control will be priced at $1.99 per 100,000 Transactions, which includes token requests and management operations, plus associated data transfer. Typically, Service Bus developers depend on Access Control to secure their Connections. More Information: http://azurefeeds.com/post/865/Announcing_Windows_Azure_platform_commercial_offer_availability_and_updated_AppFabric_pricing.aspx   Amit, S

    Read the article

  • Is there any reason to use "container" classes?

    - by Michael
    I realize the term "container" is misleading in this context - if anyone can think of a better term please edit it in. In legacy code I occasionally see classes that are nothing but wrappers for data, something like: class Bottle { int height; int diameter; Cap capType; getters/setters, maybe a constructor } My understanding of OO is that classes are structures for data and the methods of operating on that data. This seems to preclude objects of this type. To me they are nothing more than structs and kind of defeat the purpose of OO. I don't think it's necessarily evil, though it may be a code smell. Is there a case where such objects would be necessary? If this is used often, does it make the design suspect?

    Read the article

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database which has a few nested relations. There is a feature which depicts the history of a large chain of relations. It is essentially a data analysis feature. The issue is that in order to search, a large object graph must be loaded - the loading time for this object graph is not quick enough to be viable. The problem is that without loading the whole graph it makes searching from a single string nearly impossible. In order to search, explicit fields must be specified and the search data supplied. Is there a design pattern for exposing the data in a way which facilitates a single string search instead of having to explicitly define parameters?

    Read the article

  • Writing use cases for XML mapping scenarios between two different systems

    - by deepak_prn
    I am having some trouble writing use cases for XML mapping that happens after a certain trigger is invoked by the system. For example, one of the scenarios goes: the store cashier sells an item, and the transaction data is sent to the data management system. Now I am writing a functional design for the scenario, which deals with mapping XML fields between our system and the data management system. Question: has anyone had to deal with writing use cases or extension use cases for mapping XML fields between two systems (there is no XSLT involved)? Did you use a table to represent the field mapping (example is below), or any other visualization tool which does not break the bank? I searched many questions on SO and here, but nothing came close to my requirement.

    Read the article

  • Infiniband: a high-performance network fabric - Part I

    - by Karoly Vegh
    Introduction: At OpenWorld this year I managed to chat with interesting people again - one of them, answering Infiniband deep-dive questions with ease over coffee, turned out to be one of Oracle's IB engineers, Ted Kim, who actively participates in the InfiniBand Trade Association and integrates Oracle solutions with this high-speed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.

    Start of the actual post: Traditionally, data transfer between servers and storage elements happens in networks of up to 10 gigabits per second, or in SANs with up to 8 gbps Fibre Channel connections. Happens. Well, data rather trickles through. But nowadays data volumes grow well beyond the terabyte order of magnitude, and multi-socket/multi-core/multi-thread servers hunger for data that these transfer technologies just can't deliver fast enough, causing all the CPUs of this world to do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs and dozens of TB of RAM to hold the data to be modified, but we can't transfer it. Can't back up fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, just like back in the '80s everyone was used to starting a compile job and going for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8 gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of Infiniband.

    Short history: Infiniband was born as a result of the joint efforts of HPAQ, IBM, Intel, Sun and Microsoft, who planned to implement a next-generation I/O fabric in the 90s. In the 2000s Infiniband (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as the interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming; Oracle utilizes and supports it in a large set of its HW products, and it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk high-speed IP to each other, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped on the wagon, and leverages IB in its PureFlex systems, their first InfiniBand machines.

    IB structural overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs - just like NICs for Ethernet, or HBAs for FC. Into these you plug an IB cable going to an IB switch, which provides the connection to other IB HCAs. Of course you're going to need drivers for those in your OS; yes, these have long been available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB doesn't accept packet loss the way Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the data transfer can run over more efficient protocols - like native IB. About PCI connectivity: IB cards, as you see, are fast. They bring low latency, which is just as important as their bandwidth.
    Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s of PCI slot bandwidth to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around ~50 gbps. This is why most IB cards are configured in an active-standby way if both ports are used; once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data information ratio.

    Consolidation techniques: Technology never stops evolving. Mellanox is already working on the 100 gbps (EDR) version, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100 gbps? I will never use/need that much." Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform into a high-density virtualized solution providing many virtual 10 gbps interfaces through that 100 gbps? Technology never stops evolving. I still remember when a 10 mbps network was impressively fast. Back in those days, 16 MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SR-IOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SR-IOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs to your guests as native, physical HW devices. See Raghuram's excellent post explaining SR-IOV. SR-IOV for IB has been supported since LDoms 3.1. This post is getting lengthy, so I will rename it to Part I and continue it in a second post.

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track Development team is very pleased to announce that today On Track is available for our customers to download and evaluate.  To learn more about what On Track does start with our whitepaper and datasheet.   If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I’ll be speaking to several notable points about our product. Graceful Escalation via Conversations: On Track addresses the “Collaboration Problem” through a single guiding principle – graceful escalation – within the construct of a Conversation. In On Track, collaboration is based on a context (called a “Conversation”) that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration.  Within that context, On Track provides a rich set of tools to choose from.  These tools provide for communication, coordination, content management, organization, decision making, and analysis -- all essential aspects of collaboration, but not all of them are essential all of the time.  Every collaborative interaction will evolve differently.  Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all.  Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way.  The principle of graceful escalation is that you only use the tools and structure you need; so you only incur the complexity you need. Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business process.  By association with specific processes or business objects conversations extend the possible interactions and broaden participation to internal or external non-application users and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application.  Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value. Multi Client, Multi Modal: This On Track 1.0 product release includes the same day availability of  multiple clients, including iPhone and iPad applications which are now available on the Apple Store, a fully capable and accessible Outlook Add-In, along with our browser web client.  With each client we have sought to leverage the strengths of each unique device- our iPhone client supports picture and voice posts, the iPad is optimized for meeting room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track.  In addition to supporting a diverse array of clients, On Track provides a unified multi modal experience support starting with basic messages moving through to integrated documents with live annotations, snapshots, application sharing, and voice. Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity.  On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server.  
    Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately, without refreshes or moving back and forth between pages.  We’ve leveraged this core architecture across the product experience and raised the user experience bar for this type of application.  These capabilities are also based on open standards and protocols, and are fully extensible by anyone - enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications. Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies.  We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration.  Additionally, we have been working with early access customers who are adopting our technology and providing us valuable feedback - which our process has rapidly turned into improvements to our software.  On Track agility extends to our server as well, which is built to scale and is very simple to install and configure. We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as to check back here for the latest product information.

    Read the article

  • Accessing C++ class members with LuaPlus

    - by cppanda
    I've implemented LuaPlus in my engine's event manager successfully and really like the flexibility I gained, but I'm still not exactly where I want to be, because I can't link my C++ classes to a Lua class. For example, I have an Actor class in C++, and I want to be able to create the same class in Lua and gain access to its members with LuaPlus, but I can't figure out how to achieve that. Is this actually built-in LuaPlus functionality, or do I have to write my own interface that exchanges data tables between C++ and Lua? My current approach would be to fire an event in Lua script that creates a new Actor object in C++ code, and transfer its id and the data I need back to Lua. When I modify the data I send the modifications back to the C++ code again, but I actually thought there's something in LuaPlus that exposes this functionality already.

    Read the article

  • Accessing UI elements from delegate function in Windows Phone 7

    - by EpsilonVector
    I have the following scenario: a page with a bunch of UI elements (grids, textblocks, whatever), and a button that, when clicked, launches an asynchronous network transaction which, when finished, invokes a delegate function. I want to reference the page's UI elements from that delegate. Ideally I would like to do something like currentPage.getUIElementByName("uielement").insert(data), or even uielement.insert(data), or something similar. Is there a way to do this? No matter what I try, an exception is thrown saying that I don't have permission to access that element. Is there a more correct way to handle updating pages with data retrieved over the network?

    Read the article

  • BizTalk Server 2010 Beta available

    - by Rajesh Charagandla
    BizTalk Server 2010 Beta - Click Here to Download. Overview: BizTalk Server 2010 offers significant enhancements to help integrate heterogeneous line-of-business systems with Windows .NET and SharePoint based applications to optimize user productivity, gain business efficiency and increase agility. BizTalk Server 2010 allows .NET developers to take advantage of BizTalk services right out of the box to rapidly build solutions that need to integrate transactions and data from applications like SAP, mainframes, MS Dynamics and Oracle. Similarly, SharePoint developers can seamlessly use BizTalk services directly through the new Business Connectivity Services in SharePoint 2010. BizTalk Server 2010 includes a new data mapping and transformation tool to dramatically reduce the development time needed to mediate data exchange between disparate systems. It also provides a new single dashboard to manage performance parameters and streamline deployments from development to test to production. BizTalk 2010 also includes a new, scalable Trading Partner Management (TPM) model with a graphical interface for flexible management of business partner relationships and an efficient on-boarding process.

    Read the article

  • What Are Oracle Users Doing to Improve Availability and Disaster Recovery?

    - by jgelhaus
    What Are Oracle Users Doing to Improve Availability and Disaster Recovery? Download the recent database availability survey report, Enterprise Data and the Cost of Downtime, and watch the Webcast highlighting the results. The survey, conducted by the Independent Oracle Users Group, examined more than 350 data managers and professionals regarding planned and unplanned downtime, database high availability, and disaster recovery solutions. The findings will help you learn about the leading causes of planned and unplanned downtime among Oracle users, different approaches to database high availability, and why a majority of Oracle users commonly deploy Oracle Database with Oracle Real Application Clusters, Oracle Active Data Guard, and Oracle GoldenGate. Register now to download the report and watch the Webcast.

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then run a stored proc in the warehouse that was the MERGE statement that took the rows from the working table and updated the real fact table.

      USE Warehouse

      CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))

      CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)

      CREATE PROC Integration.MergePolicy as
      begin
        begin tran
        Merge fact.Policy as tgt
        Using Integration.MergePolicy as Src
        On (tgt.PolicyId = Src.PolicyId)
        When not matched by Target then
          Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
          values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
        When matched and src.Operation = 'U' then
          Update set PolicyTypeKey = src.PolicyTypeKey, Premium = src.Premium, Deductible = src.Deductible, EffectiveDate = src.EffectiveDate
        When matched and src.Operation = 'D' then
          Delete
        ;
        delete from Integration.MergePolicy
        commit
      end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was now beginning to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATS_IO and ran the sproc again. The output was quite interesting:

      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
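    For readers who want to see what that fix looks like, here is a minimal T-SQL sketch of the approach the post describes - truncate the heap work table, then give it a clustered index. The index name and key column are illustrative assumptions, not details taken from the article:

      -- Release the pages still allocated to the heap (fast, unlike DELETE)
      TRUNCATE TABLE Integration.MergePolicy;

      -- Hypothetical index name and key column; any suitable key works
      CREATE CLUSTERED INDEX CIX_MergePolicy_PolicyId
          ON Integration.MergePolicy (PolicyId);

    With the clustered index in place, even a scan of the empty work table touches only a page or two, instead of the hundreds of thousands of logical reads seen against the heap.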

    Read the article

  • How to ensure apache2 reads htaccess for custom expiry?

    - by tzot
    I have a site with Apache 2.2.22. I have enabled the mod_expires and mod_headers modules, seemingly correctly:

      $ apachectl -t -D DUMP_MODULES
      …
      expires_module (shared)
      headers_module (shared)
      …

    Settings include:

      ExpiresActive On
      ExpiresDefault "access plus 10 minutes"
      ExpiresByType application/xml "access plus 1 minute"

    Checking the response headers, I see that max-age is set correctly both for the generic case and for xml files (which are auto-generated, but mostly static). I would like to have different expiries for xml files in a directory (e.g. /data), so that http://site/data/sample.xml expires 24 hours later. I enter the following in data/.htaccess:

      ExpiresByType application/xml "access plus 24 hours"
      Header set Cache-control "max-age=86400, public"

    but it seems that Apache ignores this. How can I ensure apache2 uses the .htaccess directives? I can provide further information if requested.
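    A common reason Apache ignores per-directory .htaccess files is that the enclosing <Directory> block has AllowOverride None (the default in many distribution configurations), so the file is never read. A minimal sketch of the kind of server-config change that usually addresses this - the path shown is a placeholder, not taken from the question - is:

      # Hypothetical document-root path; adjust to the real location of /data
      <Directory /var/www/site/data>
          # FileInfo covers mod_headers' Header directive, Indexes covers mod_expires' Expires* directives;
          # AllowOverride All is the blunt alternative if broader overrides are acceptable
          AllowOverride FileInfo Indexes
      </Directory>

    After changing AllowOverride in the main configuration, reload Apache (e.g. apachectl graceful) so the new setting takes effect, then re-check the response headers for the /data URLs.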

    Read the article

  • Routing PHP memcached calls to Oracle Coherence

    - by cj
    A new post, Getting Started with the Coherence Memcached Adaptor from David Felcey, shows how PHP memcached calls can automatically be routed to store data in Oracle Coherence 12c. This is possible now that Coherence 12.1.3 supports Memcached clients using the binary memcached protocol. David's post shows how the Coherence Memcached adaptor can be configured as a proxy service that runs in the Coherence cluster. There's nothing particular to configure in the PHP application, except to enable memcached.use_sasl = 1. So what is Coherence? It is an "in-memory data grid solution" with a number of advanced features. You can read more in the Oracle Coherence 12C Data Sheet.
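    To illustrate how little the PHP side changes, here is a minimal sketch using the standard php-memcached extension against such a proxy. The host, port, and credentials are placeholders rather than values from the post; the only setting the post calls out is memcached.use_sasl = 1 in php.ini:

      ; php.ini - enables SASL support in the memcached extension
      memcached.use_sasl = 1

      <?php
      // Ordinary php-memcached client code; the Coherence adaptor speaks the binary
      // memcached protocol, so no Coherence-specific API is involved.
      $cache = new Memcached();
      $cache->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
      $cache->setSaslAuthData('username', 'password');          // placeholder credentials
      $cache->addServer('coherence-proxy.example.com', 11211);  // placeholder host/port

      $cache->set('greeting', 'Hello from Coherence', 300);
      var_dump($cache->get('greeting'));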

    Read the article

  • First Foray – About timeout

    - by SQLMonger
    It has been quite a while since I signed up for this blog site and high time that something was posted.  I have a list of topics that I will be working through and posting.  Some I am sure will have been posted by others, but I will be sticking to the technical problems and challenges that I’ve recently faced, and the solutions that worked for me.  My motto when learning something new has always been “My kingdom for an example!”, and I plan on delivering useful examples here so others can learn from my efforts, failures and successes.

    A bit of background about me… My name is Clayton Groom. I am a founding partner of a consulting firm in St. Louis, Missouri, Covenant Technology Partners, LLC, and focus on SQL Server Data Warehouse design, Analysis Services and Enterprise Reporting solutions.  I have been working with SQL Server since the early nineties, when it still only ran on OS/2. I love solving puzzles and technical challenges.

    Enough about me… On to a real problem: SSIS Connection Timeouts versus Command Timeouts. Last week, I was working on automating the processing for a large Analysis Services cube.  I had reworked an SSIS package and script task originally posted by Vidas Matelis that automates the process of adding new and dropping old partitions to/from an Analysis Services cube.  I had the package working great, tested, and ready for deployment.  It basically performs a query against the source system to determine if there is new data in the warehouse that will require a new partition to be added to the cube, and it checks the cube to see if there are any partitions present that are no longer needed in a rolling 60-month window.

    My client uses Tivoli for running all their production jobs, and not SQL Agent, so I had to build a command line file for Tivoli to use to run the package. Everything was going great. I had tested the command file from my development workstation using an XML configuration file to pass server-specific parameters into the package when executed using the DTExec utility. With all the pieces ready, I updated the dtsconfig file to point to the UAT environment and started working with the Tivoli developer to test the job.  On the first run, the job failed, and from what I could see in the SSIS log, it had failed because of a timeout. Other errors in the log made me think that perhaps the connection string had not been passed into the package correctly. We bumped the Connection Manager timeout values from 20 seconds to 120 seconds and tried again. The job still failed. After changing the command line to use the /SET option instead of the /CONFIGFILE option, we tested again, and again failure. After several more failed attempts, and after getting the Teradata DBA involved to monitor and see if we were connecting and failing or just failing to connect, we determined that the job was indeed connecting to the server and then disconnecting itself after 30 seconds.  This seemed odd, as we had the timeout values for the connection manager set to 180 seconds by then.

    At this point one of the DBAs found a post on the Teradata forum that had the clues to the puzzle: there is a separate "CommandTimeout" custom property on the Data Source object that may need to be adjusted for longer-running queries.  I opened up the SSIS package, opened the data flow task that generated the partition list table and right-clicked on the data source. From the context menu, I selected "Show Advanced Editor" and found the property. Sure enough, it was set to 30 seconds.
    The CommandTimeout property can also be edited in the SSIS Properties sheet. In order to determine how long the timeout needed to be, I ran the query from the task in the development environment and received a response in a matter of seconds.  I then tried the same query against the production database and waited several minutes for a response. This did not seem to be a reasonable response time for the query involved, and indeed it wasn’t. The Teradata DBAs adjusted the query governor settings for the service account I was testing with, and we were able to get the response back down under a minute.  Still, I set the CommandTimeout property to a much higher value in case the job was ever started during a time of high demand on the production server. With this change in place, the job finally completed successfully.

    The lesson learned for me was two-fold. First: always compare query execution times between development and production environments, and don’t assume that production will always be faster.  With higher user demands, query governors, and a whole lot more data, the execution time of even what might seem to be simple queries can vary greatly. Second: SSIS connection timeout settings do not affect command timeouts.  Connection timeouts control how long the package will wait for a response from the server before assuming the server is not available or is not responding. Command timeouts control how long a task will wait for results to start being returned before deciding that the server is not responding. Both lessons seem pretty straightforward, and I felt pretty sheepish once I finally figured out what the issue was.  To be fair though, in the 5+ years that I have been working with SSIS, I can only recall one other time where I had to set the CommandTimeout property, and that memory only resurfaced while I was penning this post.

    Read the article

  • Databinding a ListView with Mono for Android

    - by Wallym
    The world lives on data. Data is all around us and in many forms: salespeople need to know what customers have spent; Twitter users want to know what their friends are saying. How do we as developers present data to a user? In Android, we use the ListView in its various forms. In this article, we'll look at using a ListView, how we can work with it, and then discuss what we need to do to overcome some of the challenges in a mobile environment. Article url: http://visualstudiomagazine.com/articles/2012/09/14/databind-a-listview.aspx

    Read the article

  • Google I/O 2010 - BigQuery and Prediction APIs

    Google I/O 2010 - BigQuery and Prediction APIs. Speakers: Amit Agarwal, Max Lin, Gideon Mann, Siddartha Naidu. Google relies heavily on data analysis and has developed many tools to understand large datasets. Two of these tools are now available on a limited sign-up basis to developers: (1) BigQuery: interactive analysis of very large data sets, and (2) Prediction API: make informed predictions from your data. We will demonstrate their use and give instructions on how to get access. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 57:48.

    Read the article

  • Switching my collision detection to array lists caused it to stop working

    - by Charlton Santana
    I have made a collision detection system which worked before I switched to array lists and block generation. It is weird that it's not working, but here's the code - if anyone could help I would be very grateful :) The first piece of code is the block generation:

      private static final List<Block> BLOCKS = new ArrayList<Block>();

      Random rnd = new Random(System.currentTimeMillis());
      int randomx = 400;
      int randomy = 400;
      int blocknum = 100;
      String Title = "blocktitle" + blocknum;
      private Block block;

      public void generateBlocks(){
          if(blocknum > 0){
              int offset = rnd.nextInt(250) + 100; //500 is the maximum offset, this is a constant
              randomx += offset; //offset will be between 100 and 400
              int randomyoff = rnd.nextInt(80); //500 is the maximum offset, this is a constant
              randomy = platformheighttwo - 6 - randomyoff; //offset will be between 100 and 400
              block = new Block(BitmapFactory.decodeResource(getResources(), R.drawable.block2), randomx, randomy);
              BLOCKS.add(block);
              blocknum -= 1;
          }
      }

    The second is where the collision detection takes place (note: the block.draw(canvas); works perfectly; it's the block collision checks that don't work):

      for(Block block : BLOCKS) {
          block.draw(canvas);
          if (sprite.bottomrx < block.bottomrx && sprite.bottomrx > block.bottomlx &&
              sprite.bottomry < block.bottommy && sprite.bottomry > block.topry ){
              Log.d(TAG, "Collided!!!!!!!!!!!!1");
          }
          // bottom left touching block?
          if (sprite.bottomlx < block.bottomrx && sprite.bottomlx > block.bottomlx &&
              sprite.bottomly < block.bottommy && sprite.bottomly > block.topry ){
              Log.d(TAG, "Collided!!!!!!!!!!!!1");
          }
          // top right touching block?
          if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx &&
              sprite.topry < block.bottommy && sprite.topry > block.topry ){
              Log.d(TAG, "Collided!!!!!!!!!!!!1");
          }
          // top left touching block?
          if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx &&
              sprite.topry < block.bottommy && sprite.topry > block.topry ){
              Log.d(TAG, "Collided!!!!!!!!!!!!1");
          }
      }

    The values, e.g. bottomrx, are in the block.java file.

    Read the article

  • Can Separation of Duties Deter Cybercrime? YES!

    - by roxana.bradescu
    According to the CERT 2010 CyberSecurity Watch Survey: The public may not be aware of the number of incidents because almost three-quarters (72%), on average, of the insider incidents are handled internally without legal action or the involvement of law enforcement. However, cybercrimes committed by insiders are often more costly and damaging than attacks from outside. When asked what security policies and procedures supported or played a role in the deterrence of a potential cybercriminal, 36% said technically-enforced segregation of duties. In fact, many data protection regulations call for separation of duties and enforcement of least privilege. Oracle Database Security solutions can help you meet these requirements and prevent insider threats by preventing privileged IT staff from accessing the data they are charged with managing, ensuring developers and testers don't have access to production data, making sure that all database activity is monitored and audited to prevent abuse, and more. All without changes to your existing applications or costly infrastructure investments. To learn more, watch our Oracle Database Management Separation of Duties for Security and Regulatory Compliance webcast.

    Read the article

  • jQuery mobile List-View is not working after adding some jquery code [closed]

    - by Kaidul Islam Sazal
    I am using jQuery Mobile. I have an array makeArray in jQuery, and I have created a few list view items from the values of the array. Everything works fine, but the jQuery Mobile list-view style is not applied - an ordinary list view is shown instead. This is my code:

      $(document).ready(function(){
          var url = "inventory/inventory.json";
          var makeArray = new Array();
          $.getJSON(url, function(data){
              $.each(data, function(index, item){
                  if(($.inArray(item.make, makeArray)) == -1){
                      makeArray.push(item.make);
                      $('.upper_case')
                          .append('<li data-icon="list-arrow"> <a href="trade_form.php?='+ item.make +'"><img src="images/car_logo/buick.png" class="ui-li-thumb"/>' + item.make + '</a></li>');
                  }
              });
          });
      });
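    A common cause of this symptom is that jQuery Mobile only enhances list views when the page is first created; items appended later keep the plain styling until the widget is told to re-enhance them. A minimal sketch of that approach, assuming the element with class upper_case is the data-role="listview" element itself (an assumption, since the markup isn't shown), is to add one call after the loop that appends the items:

      // Re-apply jQuery Mobile's listview styling to the dynamically added <li> elements
      $('.upper_case').listview('refresh');

    In jQuery Mobile 1.x, calling .listview('refresh') after dynamic inserts is the usual way to get the list-view styling applied to new items.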

    Read the article

  • Parallel Computing in .Net 4.0

    - by kaleidoscope
    Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem: the problem is run using multiple CPUs, broken into discrete parts that can be solved concurrently, each part is further broken down into a series of instructions, and instructions from each part execute simultaneously on different CPUs. Parallel Extensions in .NET 4.0 provides a set of libraries and tools to achieve these objectives. It supports two paradigms of parallel computing. Data parallelism refers to dividing the data across multiple processors for parallel execution - e.g. if we are processing an array of 1000 elements, we can distribute the data between two processors, say 500 each; this is supported by Parallel LINQ (PLINQ) in .NET 4.0. Task parallelism breaks the program down into multiple tasks which can be parallelized and executed on different processors; this is supported by the Task Parallel Library (TPL) in .NET 4.0. A high level view is shown below:

    Read the article

  • What does "general purpose system" mean for Java SE Embedded?

    - by Majid Azimi
    The Oracle website says this about the Java SE Embedded license: development is free, but royalties are required upon deployment on anything other than general purpose systems. What does "general purpose system" mean here? We have a sensor network around the country. In each box we have installed, there is a microcontroller-based board that gets data from the environment and sends it over a serial port to an ARM-based embedded board. On this board there is a Java process which reads the data and submits it to our central server using JMS. Is this categorized as a general purpose system? Sorry for asking this here; we are in Iran, and there is no Oracle office here to ask.

    Read the article

  • Distributed Transaction Framework across webservices

    - by John Petrak
    I am designing a new system that has one central web service and several site web services, which are spread across the country and some overseas. It has some data that must be identical on all sites, so my plan is to maintain that data in the central web service and then "sync" the data to the sites. This includes inserts, edits and deletes. I see a problem when deleting: if one site has used the record, then I need to undo the delete that has happened on the other servers. This led me to the idea that I need some sort of transaction system that can work across different web servers. Before I design one from scratch, I would like to know if anyone has come across this sort of problem, and whether there are any frameworks or even design patterns that might aid me?

    Read the article
