Search Results

Search found 21028 results on 842 pages for 'single player'.


  • How/where to find all Update (msu) packages for Windows 7 (Enterprise)

    - by Rudi
    Hi, I've been using Wim2Vhd to create native-boot VHD files. (I'm using this to keep several development environments ready -- I'm a developer, I know, I need help ;-) The first boot always ends up spending several minutes installing all of the Windows updates. I know I could avoid this if I had a complete list of QFE (.msu) packages that I could supply on the Wim2Vhd command line. Where can I find all of the updates so I can download them? I have found this: http://www.megaleecher.net/Downloading_Microsoft_Windows_7_Offline_Updates but I'd like a more official source and an easier way to download them all. I'm a developer, so if I can find the information (a web service?) that lists all the current packages, I'll build a tool to download them to a single location and generate a list of them for use with Wim2Vhd.

    Read the article

  • Oracle Endeca Information Discovery 3.1 is Now Available

    - by p.anda
    Oracle Endeca Information Discovery (OEID) 3.1 is a major release that incorporates significant new self-service discovery capabilities for business users. These include agile data mashup, extended support for unstructured analytics, and even tighter integration with Oracle BI. This release is available for download from the Oracle Delivery Cloud and the Oracle Technology Network. Some of the what's-new highlights:
      - Self-service data mashup... enables access to a wider variety of personal and trusted enterprise data sources. Blend multiple data sets in a single app.
      - Agile discovery dashboards... allow users to easily create, configure, and securely share discovery dashboards with intelligent defaults, intuitive wizards and drag-and-drop configuration.
      - Deeper unstructured analysis... enables users to enrich text using term extraction and whitelist tagging while the data is live.
      - Enhanced integration with OBI... provides easier wizards for data selection and enables the OBI Server as a self-service data source.
      - Enterprise-class data discovery... offers faster performance, a trusted data connection library, improved auditing and increased data connectivity for Hadoop, web content and Oracle Data Integrator.
    Find out more: visit the OEID Overview page to download the What's New and related Data Sheet PDF documents. Have questions or want to share details about Oracle Endeca Information Discovery? The MOS Communities are a great first stop, and you can stop by the MOS OEID Community.

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone who has an interesting situation. He has 15,000 customers, and he asks if he should have a database for their data per customer. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind. Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but also things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information?) and – very important – what the security requirements are. From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
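    A minimal sketch of the weighted-scoring approach described above; the requirement names, weights and scores here are invented purely for illustration:
        # Score each design choice against weighted requirements; the highest total wins.
        requirements = {"security isolation": 3, "cross-customer queries": 2, "admin overhead": 2}

        choices = {
            "one database, multiple schemas":   {"security isolation": 2, "cross-customer queries": 3, "admin overhead": 1},
            "database per customer":            {"security isolation": 3, "cross-customer queries": 1, "admin overhead": 1},
            "regional databases with schemas":  {"security isolation": 2, "cross-customer queries": 2, "admin overhead": 2},
        }

        def total(scores):
            # Weight each per-requirement score and sum them.
            return sum(requirements[r] * s for r, s in scores.items())

        for name, scores in sorted(choices.items(), key=lambda kv: total(kv[1]), reverse=True):
            print(f"{total(scores):3d}  {name}")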

    Read the article

  • Search SSIS packages for table/column references

    - by Nigel Rivett
    A lot of companies now use TFS or some other system and keep all their packages in a single project. This means that a copy of all the packages will end up on your local disk. A major failing of SSIS is that it is sometimes quite difficult to find out what a package is actually doing, what it accesses and what it affects. This is a simple DOS script which will search through all packages in a folder for a string and write the names of the packages found to an output file. Just copy the text to a .bat file (I use aaSearch.bat) in the folder with all the package scripts, change the output filename (twice), change the find string value and run it in a DOS window. It works on any text file type, so you can also search stored procedure scripts – but there are easier ways of doing that.
        echo. > aaSearch_factSales.txt
        for /f "delims=" %%a in ('dir /B /s *.dtsx') do call :subr "%%a"
        goto:EOF
        :subr
        findstr "factSales" %1
        if %ERRORLEVEL% NEQ 1 echo %1 >> aaSearch_factSales.txt
        goto:EOF
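    A rough Python equivalent of the batch search above, for reference (a sketch, not part of the original post; the search string and output filename mirror the example):
        import os

        SEARCH_STRING = "factSales"
        OUTPUT_FILE = "aaSearch_factSales.txt"

        # Walk the current folder tree and record every .dtsx package containing the string.
        with open(OUTPUT_FILE, "w") as out:
            for root, _dirs, files in os.walk("."):
                for name in files:
                    if not name.lower().endswith(".dtsx"):
                        continue
                    path = os.path.join(root, name)
                    with open(path, "r", encoding="utf-8", errors="ignore") as f:
                        if SEARCH_STRING in f.read():
                            out.write(path + "\n")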

    Read the article

  • Caching DNS server (bind9.2) CPU usage is so so so high.

    - by Gk
    Hi, I have a caching-only DNS server which gets ~3k queries per second. Here are the specs:
      - Xeon dual-core 2.8GHz
      - 4GB of RAM
      - CentOS 5.x (kernel 2.6.18-164.15.1.el5PAE)
      - bind 9.4.2
    rndc status reports: recursive clients: 666/4900/5000, with about 300 new queries (not in cache) per second. Bind always uses 100% of one core with a single-threaded config. After I recompiled it as multi-threaded, it uses nearly 200% across two cores :( There is no iowait, only sys and user time. I searched around but didn't see any info about how bind uses CPU. Why does it become the bottleneck? One more thing, here is the RAM usage:
        cat /proc/meminfo
        MemTotal:     4147876 kB
        MemFree:      1863972 kB
        Buffers:       143632 kB
        Cached:        372792 kB
        SwapCached:         0 kB
        Active:       1916804 kB
        Inactive:      276056 kB
    I've set max-cache-size to 0 to make sure bind can use as much RAM as it wants, but it always stops at ~2GB. Since we get uncached queries every second, in theory the RAM should eventually be exhausted, but it isn't. Do you have any idea? TIA, -Gk

    Read the article

  • Beginners Guide to Client Application Services

    - by mbcrump
    What is it? Client application services make it easy for you to create Windows-based applications that use the ASP.NET AJAX login, roles, and profile application services included in the Microsoft ASP.NET 2.0 AJAX Extensions. These services enable multiple Web and Windows-based applications to share user information and user-management functionality from a single server. What can you do with it?
      - Authenticate a user. You can use the authentication service to verify a user's identity.
      - Determine the role or roles of an authenticated user. You can use the roles service to change the user interface of your application depending on the user's role. For example, you can provide additional features for users who are in an administrator role.
      - Store and access per-user application settings located on the server. You can use the Web settings service (also known as the profile service) to share settings across multiple applications and locations.
    Client application services take advantage of the Web services extensibility model through client service providers that you can specify in your application configuration files. These service providers include offline functionality that uses a local cache for authentication, roles, and settings data when a network connection is unavailable. Give me an example of where I would use this! Sharing login and user role information between a Windows Forms application and an ASP.NET application. How do I configure it? Click Here

    Read the article

  • "Sticky button" configuration under X11/Ubuntu?

    - by John Whitley
    Is there a configuration or app that will enable Sticky Keys-like functionality for a pointer button under X11? (On Ubuntu 9.10, FWIW.) To be clear, I'd like a single tap (down/up events) to be treated as a down event, and a following tap to be treated as an up event. Context: I have a trackball with a fourth button that I've mapped to act as horizontal/vertical scroll. This works great. It'd be even better if I didn't have to hold the button down while scrolling.

    Read the article

  • Dovecot ignoring maximum number of IMAP connections

    - by Michelle
    I have a single-mailbox mail server running Dovecot/Postfix and two IMAP clients, Thunderbird on the PC and K9 on Android. I keep receiving this error in my logs even after changing the 'mail_max_userip_connections' variable to 50:
        puppet dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10): user=<[email protected]>, method=PLAIN, rip=62.242.90.2, lip=198.29.31.229, TLS
    Why does the log say that it is set to 10? Is that hardcoded?
        grep -r "mail_max_userip_connections" /etc/dovecot
        /etc/dovecot/conf.d/20-managesieve.conf: #mail_max_userip_connections = 10
        /etc/dovecot/conf.d/20-pop3.conf: #mail_max_userip_connections = 3
        /etc/dovecot/conf.d/20-imap.conf: mail_max_userip_connections = 50
    I've restarted dovecot after making the changes, but this error is still logged and I can't access the mailbox. Can anyone help me understand why I can't seem to raise the maximum limit?

    Read the article

  • Web-based (intranet / non-hosted) timesheet / project tracking tools

    - by warren
    I realize some similar questions have been asked along these lines before, but from reading through them today, it appears they don't match my use case. I am looking for a web-based, non-hosted time and project tracking tool. I've downloaded Collabtive and Achievo so far, but am looking for other suggestions, too. My list of requirements:
      - runs on a standard LAMP stack
      - non-hosted (i.e., there is an option to download and run it on a local server)
      - not a desktop/single-user application
      - easy to use - my audience is a mix of technical and non-technical folks
      - easy to maintain - when time for upgrading comes, I'd really like to not have to rebuild the app (a la ./configure ; make ; make install)
      - needs to support multiple users
      - free-form project additions: we don't have a central project management authority (users should be able to add whatever they're working on, not merely pick from a drop-down)
    Does anyone here have experience with such tools? It doesn't have to be free... but free is always nice :)

    Read the article

  • Z Order in 2D with orthographic projection and texture atlas

    - by Carbon Crystal
    I am working with a 2D game in OpenGL ES and have a question about z-order together with a texture atlas. I am using an orthographic projection because I want pixel-perfect rendering of 2D sprites; however, from what I can determine, the draw order is really the only thing that will determine which textures (sprites) appear above or below their neighbors. That is, the "z-index" is a function of the order in which the textures are drawn, as opposed to the z coordinate on the vertex array being drawn. So.. I have a texture atlas to save binding multiple textures for each draw call, but this immediately creates a problem if there is more than one atlas in play. If I need to draw textures from more than one atlas (typically the case if I have too many sprites to fit in a single atlas of a reasonable size), then I can't maintain a "draw order" across atlases unless I want to bind/unbind the atlas textures more than once.. which kinda defeats the purpose. Does anyone have any clues as to what the best approach is here? Currently I'm running under the assumption that I will have to declare different fixed "depths" (e.g. foreground, background, etc.) in my 2D scene and assume that the z-order for sprites at a given depth is the same. Then I can have as many atlases as I need at each depth and simply draw the depths in order (along with their associated atlases). I'd love to hear what other people are doing.
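    A minimal sketch of the fixed-depth-layer idea described in the last paragraph: bucket draw calls by (layer, atlas) so layers are drawn back to front while atlas binds are minimized within each layer. The bind_texture and draw functions are placeholders, not a real OpenGL ES binding:
        from collections import defaultdict

        # Each sprite carries a depth layer (0 = background, higher = closer) and an atlas id.
        sprites = [
            {"name": "sky",    "layer": 0, "atlas": "atlas_a"},
            {"name": "tree",   "layer": 1, "atlas": "atlas_b"},
            {"name": "player", "layer": 1, "atlas": "atlas_a"},
            {"name": "hud",    "layer": 2, "atlas": "atlas_b"},
        ]

        def bind_texture(atlas_id):   # placeholder for binding the atlas texture
            print(f"bind {atlas_id}")

        def draw(sprite):             # placeholder for submitting the sprite's quad
            print(f"  draw {sprite['name']}")

        # Bucket sprites by layer, then by atlas within the layer.
        layers = defaultdict(lambda: defaultdict(list))
        for s in sprites:
            layers[s["layer"]][s["atlas"]].append(s)

        # Draw back to front; within a layer, one bind per atlas.
        for layer in sorted(layers):
            for atlas_id, batch in layers[layer].items():
                bind_texture(atlas_id)
                for s in batch:
                    draw(s)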

    Read the article

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response. There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine): "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools." Responsibilities include: "Managing a team of senior and junior developers; Design and coding high-quality software..." For the full background story, requirements, qualifications and responsibilities please visit the official page. Comments about this post are welcome at the original blog.

    Read the article

  • Tie stock quote value to cell in Excel 2011 Mac

    - by vedantchandra
    I've been working on a mock stock portfolio in Excel, and I've been looking for ways to automatically update the data, e.g. the stock price and P/E ratio. I have tried using a web query to MSN Money, but that just brings up the whole stock quote across multiple cells; I want data to be updated in individual cells only. The only web query solution I can think of is if someone hosted a website where each value in the stock quote was saved in a different HTML file. I could then web-query that file for each cell requiring that value. However, no website offers this. So, in essence, is there any tool in Excel 2011 for Mac that will let me pull individual values from a stock quote and assign them to a single cell?

    Read the article

  • Is it possible to have multiple ReWrite rules that all do the same Action, for an IIS7.5 webserver?

    - by Pure.Krome
    I've got the rewrite module working great for my IIS7.5 site. Now I wish to add a number of URLs that all go to an HTTP 410 Gone status. E.g.:
        <rule name="Old Site = image1" patternSyntax="ExactMatch" stopProcessing="true">
          <match url="image/loading_large.gif"/>
          <match url="image/aaa.gif"/>
          <match url="image/bbb.gif"/>
          <match url="image/ccc.gif"/>
          <action type="CustomResponse" statusCode="410" statusReason="Gone"
                  statusDescription="The requested resource is no longer available" />
        </rule>
    But that's invalid - the website doesn't start, saying there's a rewrite config error. Is there another way I can do this? I don't particularly want to define a single URL and ACTION for each URL.

    Read the article

  • Where's my start button?

    - by Dennis Vroegop
    I have to be honest here for a moment. The one thing people most complain about when they talk about Windows 8 is that they miss the Start Button. You know, that dreaded thing that everybody hated when it was introduced… I usually don't go into these kinds of discussions unless I am personally involved, but this one I cannot let go. Why are people doing this? Windows 8 is a great OS. They have changed, updated and perfected so many things that there is enough to talk or write about. Yet all articles or discussions come down to "Where's my start button?" In order to save myself from having to explain this every single time, I wrote this post, and from now on I will simply refer to this blog when I get asked that question. Here it is. Your start menu is there. It's right in front of your nose. It's two-dimensional, it's got huge buttons (although they are more than just buttons; they're alive and therefore called Live Tiles). Just go through those tiles and click whatever you want to start up. Don't want to look for an item? Just start typing. Really, it is that simple. When you are on the start screen, just start typing (part of) the name of the program you want and you'll find it. As you see in the attached example, I started typing "word" and it found Word, Wordfeud, Wordament etc. If you want to find something else besides a program (say you want to change the region you're in), just click on Settings (it will already show you how many hits there are in that section). People, my request is: dive into something before you complain about it. Look around. This feature is so much easier to use than the old stuff. But you have to know about it. So, I won't get into this discussion anymore.

    Read the article

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    (c) denise carbonell
    I think Johannes Ernst summarized pretty well what happened in a broad sense with regard to OpenId, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from this. First - the "Apple Lesson" - if user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad saw success versus the tablet market. The iPad is not only a very good technology product, but it was backed by a very good marketing plan. I know most people do not want to think about marketing here - but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about. Second - the "Facebook Lesson" - Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company decided to make a singular focus of making it happen. Third - the "Developers Lesson" - the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. By using a combination of URL and Javascript, the power a single HTML page now gives a developer writing Web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the social graph API had a profound impact on me in designing our next generation APIs. Posted via email from Virtual Identity Dialogue

    Read the article

  • Nginx routing script for NodeJS and Wordpress

    - by Nilay Parikh
    We are moving our blogs and site from WordPress to NodeJS and are ready to move into production. However, I'm not able to figure out how to implement routing from the front server (Nginx) to NodeJS (the preferred web instance), and, if the data is not yet synced into the NodeJS website (NodeJS will throw a 404), fall back (using a reverse proxy) to WordPress and serve the page during the transition period.
    Q1. Is this approach good for the scenario, or can anyone suggest a better approach?
    Q2. Should NodeJS act as the reverse proxy itself (using bouncy: https://github.com/substack/bouncy or a similar package) in the event of a fallback, or should I stick with Nginx to do so using the FastCGI approach?
    Both NodeJS and WordPress are on a single server. First scenario:
        User -> Nginx -> NodeJS (8080)   /if the resource is available, serve it directly
                                         \if the resource is not available, reverse-query WordPress and serve the content
    Second scenario:
        User -> Nginx -> NodeJS (8080)   /if the resource is available, serve it directly
                                         \if the resource is not available, return 404 to Nginx, and an Nginx rule falls back to WordPress (FastCGI PHP)
    Later we plan to phase out WordPress and PHP from the server environment completely. I'd like to see any examples of Nginx or Varnish scripts and/or NodeJS scripts if you have them to refer to. Thanks.

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
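    A minimal sketch of one way to roll up ranged task estimates into project-level numbers at different confidence levels, assuming the per-task ranges are independent and roughly normally distributed. This illustrates the spirit of the approach above, not McConnell's exact formulas, and the task numbers are invented:
        import math

        # (best_case_hours, worst_case_hours) per task; the 50% estimate is taken as the midpoint.
        tasks = [(2, 4), (1, 3), (3, 8), (2, 6), (1, 2)]

        # Treat each range as roughly +/- 2 standard deviations around its midpoint.
        expected = sum((lo + hi) / 2 for lo, hi in tasks)
        sigma = math.sqrt(sum(((hi - lo) / 4) ** 2 for lo, hi in tasks))

        # One-sided z-values for a few confidence levels.
        for label, z in [("50%", 0.0), ("75%", 0.67), ("90%", 1.28)]:
            print(f"{label} confidence: {expected + z * sigma:.1f} hours")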

    Read the article

  • Does kern.hz still have any relevance in FreeBSD if "dynamic tick mode" is enabled?

    - by Frerich Raabe
    I'm running FreeBSD 9.0 as a virtual machine in a KVM setup. In previous versions of FreeBSD it was common to force the kern.hz setting to a lower value so that the virtual machine does not keep the host busy handling timer interrupts without having any work to do - the FreeBSD Handbook explains: "The most important step is to reduce the kern.hz tunable to reduce the CPU utilization of FreeBSD under the Parallels environment. This is accomplished by adding the following line to /boot/loader.conf: kern.hz=100. Without this setting, an idle FreeBSD Parallels guest OS will use roughly 15% of the CPU of a single processor iMac®. After this change the usage will be closer to a mere 5%." However, in FreeBSD 9, the "dynamic tick mode" (aka "tickless mode") is the default, controlled by the kern.eventtimer.periodic setting, which defaults to 0 (read: tickless mode). This makes me wonder - does the tip of lowering kern.hz still have any relevance for making FreeBSD 9 play nicely in a virtual machine?

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel scrum turns active developers into passive developers, and it seems that the overall problem is not scrum itself but rather a bad implementation of scrum. So here I have some questions about the scope of the product owner's (PO's) responsibilities and the limits he or she shouldn't cross.
      - Should the PO interfere in UI design when there are designers on the scrum team? (Examples that have happened to us: replacing checkboxes with a drop-down list with two items, yes and no; making some boxes larger; left-aligning some content instead of centering it on the page; and so on.) If so, to what extent? Colors? Layout?
      - Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?
      - Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.
      - How granular could/should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the scrum team can implement this user story as a five-step wizard or as a single page. At what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?
    I'm asking these questions to judge whether our implementation is right or wrong.

    Read the article

  • Group policy issues

    - by Alex Berry
    We are having an issue with one of our clients' relatively new SBS installs. The domain consists of a single SBS 2011 server with 4 Windows 7 clients and 3 XP clients. Most of the time everything is fine; however, roughly every 3 days the Windows 7 clients start timing out when trying to receive computer group policy. This results in hour-long delays before getting to the login screen in the morning. This is accompanied by event ID 6006, with Winlogon errors stating it took 3599 seconds to process policy. Once they've booted they can log in without issue, however gpupdate fails again on computer policy and gpresult comes back with access denied, even when run as domain admin... At this point, if we restart the server, the network is fine for another 3 days. I thought perhaps it might be IPv6 or SMB2, but disabling IPv6 on the clients doesn't help and the clients can browse the sysvol folder freely over SMB2 anyway. Does anyone have any ideas or routes I can take to further diagnose the issue? Thanks in advance :)

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.
    1. Cross-schema dependencies
    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:
        SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
        TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
      1. Creating a table with the new schema
      2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
      3. Dropping the old table
      4. Renaming the new table to the same name as the old table
    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
      1. Drop the foreign key constraint on SchemaA.Table1
      2. Rebuild SchemaB.Table1
      3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table
    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, e.g. a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.
    2. Dependency chains
    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?
        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1
    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.
    3. Comparison dependencies
    Consider the following schema:
        SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
        TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100));
    What will happen if we use the dependency algorithm above on the source and target databases? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference, so SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:
        SOURCE                TARGET
        SchemaA.Table1   ->   SchemaA.Table1
        SchemaB.Table1   ->   (no object exists)
    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
      1. Create SchemaB.Table1
      2. Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1
    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try to create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):
        SOURCE:
        CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);
        TARGET:
        CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);
    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try to synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
      1. Find initial dependencies of schemas the user has selected to compare on the source and target
      2. Include these objects in both the source and target object populations
      3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
      4. Repeat 2 & 3 until no more objects are found
    For the schema above, this will result in the following sequence of actions:
      1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
      2. Include objects in both source and target: SchemaB.Table1 included in source and target
      3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
      4. Include objects in both source and target: SchemaC.Table1 included in source and target
      5. Run dependency query on found objects: no objects found in source; no objects to start with in target
      6. Stop
    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
    4. Object blacklists and fast dependencies
    When we tested this solution, we were puzzled in that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc.) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (e.g. the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query:
        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';
    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
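    A minimal Python sketch of the iterative source/target dependency-gathering loop described in section 3 above. The dependency maps are toy stand-ins for the real dictionary queries; this shows only the back-and-forth shape of the algorithm, not the actual Schema Compare implementation:
        # Toy dependency edges per database: object -> objects it depends on.
        SOURCE_DEPS = {"SchemaA.Table1": ["SchemaB.Table1"]}
        TARGET_DEPS = {"SchemaC.Table1": ["SchemaB.Table1"]}  # C references B on the target only

        def dependents_of(objs, deps):
            """Objects related to any object in objs, in either direction of the edge."""
            found = set()
            for obj, uses in deps.items():
                if obj in objs:
                    found.update(uses)
                if any(u in objs for u in uses):
                    found.add(obj)
            return found

        def gather(populated, source_deps, target_deps):
            included = set(populated)
            new_objs = set(populated)
            while new_objs:
                # Query each side starting from the objects most recently found on the other side.
                found = dependents_of(new_objs, source_deps) | dependents_of(new_objs, target_deps)
                new_objs = found - included
                included |= new_objs   # include new objects in both source and target populations
            return included

        print(gather({"SchemaA.Table1"}, SOURCE_DEPS, TARGET_DEPS))
        # -> {'SchemaA.Table1', 'SchemaB.Table1', 'SchemaC.Table1'}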

    Read the article

  • What is "2LUN" mode in connection with RAID?

    - by naxa
    I've come across RAID products that also list JBOD (just a bunch of disks) mode and 2LUN mode. What the heck is 2LUN mode? I could not find a description; the closest thing seems to be LUN, 'logical unit number', but I don't get the 2LUN thing. UPDATE 1: This is what Wikipedia has to say about JBOD: "JBOD (derived from 'just a bunch of disks'): an architecture involving multiple hard drives, while making them accessible either as independent hard drives, or as a combined (spanned) single logical volume with no actual RAID functionality." So JBOD can actually mean two different (albeit related) things. Guest's answer says 2LUN means no spanning. Does this suggest that 2LUN would simply mean the JBOD variant with no spanning?

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders), it is more about syncing configurations. I would like to have exactly the same version of Ubuntu with all the software installed and configured both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC) without using Ubuntu Sync and with minimal maintenance effort (setup once, run for a long time). The use case is the following:
      1. I work on my Laptop PC and make some changes to software configuration, for example:
         - configure vim to have a new plugin
         - update the Search Tracker / Recoll file search index
         - configure Thunderbird to have an additional IMAP account ('remember password')
         - add some new bookmarks in Firefox/Chrome
         - change the desktop background image
         - install new software with apt-get install
         - build and install new software with checkinstall
         - etc.
      2. I do some 'sync' operation.
      3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
      4. I work on my Desktop PC and make some changes to software configuration, for example:
         - add a new directory to the list of directories to be backed up by DejaDup
         - add a new spell-checking dictionary to Libreoffice Writer
         - configure the Terminator software to have colored fonts
         - install a new font into the Ubuntu system
         - configure Ekiga to make phone calls
         - etc.
      5. I do some 'sync' operation.
      6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.
    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services? Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations? Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers will work then.

    Read the article

  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where a user wants to persist a master-child relationship. Here we will have two tables, dept and emp (using the Scott schema), which have a master-child relationship. Model diagram: in the model, Dept is the master table and Emp is the child table, and Dept is related to Emp by a one-to-n relationship. Let's assume we need to make new entries in the emp table using the EJB persist method. Create an Emp form by manually dropping the fields, where deptno will be dropped as Single Selection -> ADF Select One Choice (it is a foreign key in the emp table) from the deptFindAll data control. Make sure to bind all field variables in the backing bean. Employee form: once the Emp form is created, if the persistEmp() method is used to commit the record, it will persist all the Emp fields into the emp table except deptno, because deptno is passed as an object reference to the persistEmp method (it is a foreign key reference). So deptno can't be passed to the persistEmp method directly; instead, deptno should be set explicitly on the emp object, and then the persist will save the deptno to the emp table. The solution below is one way to work around this: create a method in the session bean for adding emp records and expose this method in the data control. For example (here "em" is an EntityManager, a member variable of the session EJB bean):
        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // setting the deptno explicitly
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // passing the dept object
            emp.setDept(dept);
            // persist the emp object data to the Emp table
            em.persist(emp);
        }
    From the Data Control palette, drop addEmpRecord as an ADF Method button. In the Edit Action Binding window, enter the parameter values that are bound in the backing bean. For example: if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as "#{backingbean.deptno.value}". Binding:

    Read the article

  • Reverting problems caused by checkinstall with gcc build

    - by slavik262
    I recently downloaded the GCC 4.6.2 source in order to play around a bit with C++11. Having been told about checkinstall and its usefulness in installing programs from source, I created a Debian package from the install using sudo checkinstall -D make install. Wanting to see how well the newly created package worked, I removed it using Synaptic Package Manager. As it turns out, the package checkinstall created from make install tried to remove every single file the installation process touched, including shared gcc libraries like /lib64/libgcc_s.so. Despite not being able to run a bunch of programs due to this missing dependency, I was able to restore my system back to normal by reinstalling the package from command line using dpkg. At this point I want to remove the package from the package manager since it's so dangerous, but not remove the installed files. I was looking around in /var/lib/dpkg and found that the package manager seems to be based on text files which list packages and such - can I just remove all mention of the package from the files in /var/lib/dpkg, or is there a safer way to go about this?

    Read the article
