Search Results

Search found 978 results on 40 pages for 'roll'.


  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate multiple targets from a single source, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Consider the example mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3): for example, it is successfully inserted into TARGET_1 and TARGET_3, but fails to be inserted into TARGET_2, and the current mapping property TLO (target load order) is "TARGET_1 -> TARGET_2 -> TARGET_3". What should Oracle Warehouse Builder (OWB) do in order to commit the appropriate data to all affected targets at the same time? If it doesn't behave as you intended, the data could become inaccurate and possibly unusable.

    (Figure: Example Mapping)

    In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combinations of these two features and how they affect the results in the target tables. Before going to the example, let's review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide, 11g Release 2).

    Operating Modes:

    • Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
    • Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor; all subsequent statements are PL/SQL.
    • Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.

    Commit Strategies:

    • Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of the other targets. Use automatic commit when the consequences of multiple targets being loaded unequally are small or irrelevant.
    • Automatic Correlated: a specialized type of automatic commit that applies only to PL/SQL mappings with multiple targets. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use correlated commit when it is important that every row in the source affects all affected targets uniformly.
    • Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.

    Combining the commit strategy and operating mode:

    To understand the effects of each combination of operating mode and commit strategy, I'll illustrate using the example mapping. First we insert 100 rows into the SOURCE table, making sure that the 99th and 100th rows have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. When the example mapping runs, OWB tries to load all 100 rows into each of the targets, but it should fail to load the 100th row into TARGET_2, because that row violates the table's unique key constraint. With the different combinations of commit strategy and operating mode, here are the results (configuration and result screenshots omitted):

    • Set-based / Correlated Commit: A single error anywhere in the mapping triggers the rollback of all data. When OWB encounters the error inserting into TARGET_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into TARGET_1 and does not attempt to load rows into TARGET_3. No rows are added to any of the target tables.

    • Row-based / Correlated Commit: OWB evaluates each row separately and loads it into all three targets. Loading continues this way until OWB encounters an error loading row 100 into TARGET_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into TARGET_1 and does not attempt to load row 100 into TARGET_3. Then, if there are remaining rows, OWB continues loading them, resuming with TARGET_1. The mapping completes with 99 rows inserted into each target.

    • Set-based / Automatic Commit: When OWB encounters the error inserting into TARGET_2, it does not load any rows into that table and reports an error for it. It does, however, continue to insert rows into TARGET_3, and it does not roll back the rows previously inserted into TARGET_1. The mapping completes with one error message for TARGET_2, no rows inserted into TARGET_2, and 100 rows inserted into each of TARGET_1 and TARGET_3.

    • Row-based / Automatic Commit: OWB evaluates each row separately for loading into the targets. Loading continues this way until OWB encounters an error loading row 100 into TARGET_2 and reports the error. OWB does not roll back row 100 from TARGET_1, and it does insert it into TARGET_3. If there are remaining rows, it continues to load them. The mapping completes with 99 rows inserted into TARGET_2 and 100 rows inserted into each of the other targets.

    Note: Automatic Correlated commit is not applicable to Row-Based (Target Only) mode; if you design a mapping with that combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments; space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.

    If you want to practice in your own environment, you can follow these steps:

    1. Import the MDL file: commit_operating_mode.mdl
    2. Fix the location for the Oracle module ORCL and deploy all tables under it.
    3. Insert sample records into the SOURCE table using the PL/SQL block below (99 rows from the loop, plus a 100th row that repeats ID 99):

           begin
               for i in 1..99
               loop
                   insert into source values(i, 'col_'||i);
               end loop;
               insert into source values(99, 'col_99');
           end;

    4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the mapping's TLO (target load order) feature is enabled.
    5. Deploy the mapping MAPPING_1.
    6. Run the mapping and check the result.
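    For reference, here is a minimal SQL sketch of the tables this example assumes. The MDL file is the authoritative definition; the column names below are assumptions inferred from the insert script:

        -- Assumed shapes only; the deployed MDL defines the real tables.
        CREATE TABLE source   (id NUMBER, col VARCHAR2(30));
        CREATE TABLE target_1 (id NUMBER, col VARCHAR2(30));
        CREATE TABLE target_2 (id NUMBER, col VARCHAR2(30),
                               CONSTRAINT target_2_uk UNIQUE (id)); -- the constraint row 100 violates
        CREATE TABLE target_3 (id NUMBER, col VARCHAR2(30));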


  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
    Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline:

    • Taking the training wheels off: Accelerating the Business with Oracle IAM - the explosive growth in expectations for IAM infrastructure, and the business cases it supports to gain investment in new security programs.
    • "Necessity is the mother of invention": Technical solutions developed in the field - well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations.
    • The Art & Science of Performance Tuning of Oracle IAM 11gR2 - real-world examples of performance tuning with Oracle IAM.
    • Nowhere to go but up: Extending the benefits of accelerated IAM - anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM.

    Let's get started by talking about the changing dynamics driving these discussions.

    Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run reconciliations, or to import or update user accounts and attributes. Further, IT organizations now operate as shared services, with SLAs at the levels their "clients" expect of telephone carriers. Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. Asynchronous systems and batched updates no longer meet expectations, and the number and types of users keep growing.

    When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work, who knew how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting the local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now done by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes with automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity.

    As computing paradigms in the IT industry continue to change, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly. Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted in the cloud or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler: logging into custom desktop apps to request approvals, or going through email- or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application - to feel like the application they are using.

    Citizen- and customer-facing applications have grown up from everywhere: custom applications, 3rd-party tools, and systems merged in from acquired entities or 3rd-party OEM products resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, and so on. Often the designers and developers are no longer accessible and the documentation is limited. Bringing the underlying directories together to scale for growth and improve the user experience is critical for revenue, but also for operations.

    Job functions are more dynamic. Take the Olympics, for example. Endless organizations, from corporations broadcasting, endorsing, or marketing through the event, to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information, from hot ads to racing strategies or security plans. IAM is expected to let teams spin up, enable new applications, protect privacy, and secure critical infrastructure, and then be disabled just as quickly as users go back to their previous responsibilities.

    On a more technical level, businesses today need optimized directories along with concrete tuning guidelines and parameters. The challenge is to assess and choose the correct architectural pattern (centralized, virtualized, or distributed) and the right directory approach (virtual, direct, or replicated), and then tune it accordingly.

    Today's business organizations have very complex, heterogeneous enterprises that contain diverse and multifaceted information. In an ever-changing global landscape, the strategic end goal in challenging times is business agility, and the business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and of users granted access through those devices, to grow exponentially, and businesses need to deploy an IAM system that can keep up with the resulting demand for authentication and authorization. Increased innovation is pushing businesses and organizations to centralize their identity management services.

    Access management needs to handle traditional web-based access as well as new innovations around mobile, and it must address insufficient governance processes, which can lead to rogue identity accounts that become a source of vulnerabilities within a business's identity platform. Risk-based decisions present their own challenges: an adaptive risk model must make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and other organizational processes will languish.

    A single server location presents not only network concerns for a distributed user base, but identity challenges as well. The network risks center on the latency of the long round trip the traffic has to take; the other risks concern availability, because if the single identity server is lost, all access is lost.

    As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management solutions.


  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You can imagine that the database could crash while copying all rows from the original table to the new one; for example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. The new interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB executes a merge sort before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect, though. For example, DROP FOREIGN KEY still invoked the table copy, renaming columns could conflict with InnoDB foreign key constraints, and combining ADD KEY and DROP KEY in one ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes.

    In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE; LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY, which requires at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED.

    Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    • handler::check_if_supported_inplace_alter: checks whether the storage engine can perform all requested operations, and if so, what kind of locking is needed.
    • handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    • handler::inplace_alter_table: in InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase, which can be executed online (with concurrent writes to the table).
    • handler::commit_inplace_alter_table: this is where the operation is committed or rolled back. Here, InnoDB drops any indexes, renames any columns, drops or adds foreign keys, and finalizes a table rebuild or index creation. It also discards any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation.

    In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split: part of it lives inside the InnoDB data dictionary tables, and part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may happen that an index needs to be created on a new column to speed up queries, but blocking modifications on the table while creating the index is unacceptable. It turns out that supporting online index creation is conceptually not so hard. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table log their operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records bypasses record locking. We still generate redo log for writing the index pages, although it would suffice to log page allocations only and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes.

    The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in place. For example, changing the ROW_FORMAT of a table requires a rebuild.

    Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column; when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column; or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements.

    Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in place, by lazily converting the table to a newer format. This would require the data dictionary to keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN.

    The bulk copying of the table bypasses record locking and undo logging. To facilitate online operation, a temporary log is associated with the clustered index of the table, and threads that modify the table also write their changes to this log. When altering the table, we skip all records that have been marked for deletion; in this way, we can simply discard any undo log records that were not yet purged from the original table.

    Off-page columns (BLOBs) are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table, because the BLOBs can still be needed when applying changes from the log. We also have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because those columns are freed at rollback.
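    To make the 5.6 clauses above concrete, here is a minimal sketch; the table and index names are placeholders of my own choosing:

        CREATE TABLE t (a INT, b INT) ENGINE=InnoDB;

        -- Online secondary index creation: concurrent DML on t remains possible.
        ALTER TABLE t ADD INDEX idx_b (b), ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the pre-5.6 table-copying approach; requires at least a shared lock.
        ALTER TABLE t DROP INDEX idx_b, ALGORITHM=COPY, LOCK=SHARED;

    If the storage engine cannot honor an explicitly requested ALGORITHM or LOCK combination, the statement fails with an error rather than silently falling back to a more restrictive mode.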


  • C to C++ Conversion [closed]

    - by Annalyne
    Can someone convert this code to C++, pretty please? :(

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define WEAPON_ROPE 10
        #define WEAPON_REVOLVER 20
        #define WEAPON_LEADPIPE 30
        #define WEAPON_CANDLESTICK 40
        #define WEAPON_KNIFE 50
        #define WEAPON_WRENCH 60

        #define PEOPLE_MRGREEN 100
        #define PEOPLE_MSSCARLET 200
        #define PEOPLE_CONLMUSTARD 300
        #define PEOPLE_PROFPLUM 400
        #define PEOPLE_MISPEACOCK 500
        #define PEOPLE_MISWHITE 600

        #define PLACE_KITCHEN 1
        #define PLACE_HALL 2
        #define PLACE_POOLROOM 3
        #define PLACE_STUDY 4
        #define PLACE_LOUNG 5
        #define PLACE_LIBRARY 6
        #define PLACE_CONSERVATORY 7
        #define PLACE_DINING 8
        #define PLACE_BILLIARDS 9

        int main(void)
        {
            int die = 0;
            // 6 players, up to 9 cards each. (The initializer originally posted
            // had mismatched dimensions; {{0}} zeroes the whole array.)
            int players[6][9] = {{0}};
            // Note: WEAPON_CANDLESTICK and PEOPLE_CONLMUSTARD appear twice, as posted.
            int allCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE,
                              WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE,
                              WEAPON_WRENCH,
                              PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD,
                              PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK,
                              PEOPLE_MISWHITE,
                              PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY,
                              PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY,
                              PLACE_DINING, PLACE_BILLIARDS};
            int deckSize = 23; // number of cards in the allCards array
            int count;
            for (count = 0; count < deckSize; ++count) {
                printf(", %d", allCards[count]);
            } // End for

            // These three arrays are so you can put a card back, if need be...
            int weaponCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE,
                                 WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE,
                                 WEAPON_WRENCH};
            int weaponDeckSize = 7;
            int peopleCards[] = {PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD,
                                 PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK,
                                 PEOPLE_MISWHITE};
            int peopleDeckSize = 7;
            int placeCards[] = {PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY,
                                PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY,
                                PLACE_DINING, PLACE_BILLIARDS};
            int placeDeckSize = 9;

            // Seed rand(). The original seeded with clock(), which is nearly always 0
            // this early in the run and so dealt the same game every time.
            srand(time(NULL));

            int killer[3]; // killer[0..2] are assigned below
            int deckShuffle = rand() % weaponDeckSize;   // pick one card out of the deck
            killer[0] = weaponCards[deckShuffle];
            allCards[deckShuffle] = 0;                   // card drawn; no longer in the deck
            deckShuffle = rand() % peopleDeckSize;       // pick another random card
            killer[1] = peopleCards[deckShuffle];
            allCards[deckShuffle + weaponDeckSize] = 0;  // card drawn; no longer in the deck
            deckShuffle = rand() % placeDeckSize;        // randomly pick the last card needed
            killer[2] = placeCards[deckShuffle];
            allCards[deckShuffle + weaponDeckSize + peopleDeckSize] = 0; // card drawn

            int numberOfCards = 0;
            printf("CLUE\n");
            printf("written by John Schintone\n");
            printf("Original game developed by Hasbro\n");

            int numberOfPlayers = 0;
            while ((numberOfPlayers < 3) || (numberOfPlayers > 6)) {
                printf("How many players are going to play:\n");
                printf("[number] > ");
                scanf("%d", &numberOfPlayers);
                switch (numberOfPlayers) { // cheap integer dispatch
                case 6: numberOfCards = 3; break;
                case 5: numberOfCards = 4; break;
                case 4: numberOfCards = 5; break;
                case 3: numberOfCards = 6; break;
                default:
                    printf("You must enter a number between 3 and 6...\n");
                } // End switch
            } // End while

            // Note: ++index1 increments before the statement line; index1++ after it.
            int index1, index2;
            for (index1 = 0; index1 < numberOfPlayers; ++index1) {
                printf("Player %d", index1);
                for (index2 = 0; index2 < numberOfCards; ++index2) {
                    // allCards[deckShuffle] == 0 marks a card that was already
                    // removed, so keep drawing until we hit a live card.
                    while (allCards[deckShuffle] == 0) {
                        deckShuffle = rand() % deckSize;
                    } // End while
                    players[index1][index2] = allCards[deckShuffle];
                    allCards[deckShuffle] = 0; // card removed
                    printf(", %d", players[index1][index2]);
                    switch (players[index1][index2]) {
                    case WEAPON_ROPE:    break; // Add more...
                    case PEOPLE_MRGREEN: break; // Add more...
                    case PLACE_KITCHEN:  break; // Add more...
                    default:
                        printf("Program has caught player %d cheating...", index1);
                    } // End switch
                } // End for
                printf("\n");
            } // End for

            // killer[1] is the person, killer[0] the weapon, killer[2] the place
            // (the original printed them in weapon/person order).
            printf("The killer is %d, with the %d, and in the %d\n\n",
                   killer[1], killer[0], killer[2]);

            printf("Type h for this help... \n");
            printf("Type e to escape... \n");
            printf("Type r to roll the die... \n");

            char command = '\0'; // '\0' is the null character
            while (command != 'e') {
                printf("[one character] > ");
                scanf(" %c", &command); // the leading space skips leftover newlines
                if (command == 'r') {
                    die = rand() % 6 + 1;
                    printf("Your number is: %d \n", die);
                } // End if
                if (command == 'h') {
                    printf("Type h for this help... \n");
                    printf("Type e to escape... \n");
                    printf("Type r to roll the die... \n");
                } // End if
                printf("\n");
            } // End while

            return 0; // success
        } // End main()


  • Debian 3.1 (Sarge) init.d boot order

    - by Adam Lewis
    I am using a TS-7800 single-board computer from Technologic Systems that ships with Debian 3.1 (Sarge). I updated it to Squeeze, but due to various driver issues I have been forced to roll back to Sarge. I am attempting to apply the driver settings and configuration that my application services need before they start. Ideally I would call one init.d script that sets up the drivers and configuration, and then call the other init.d scripts (one for each process). I am left scratching my head over how to guarantee this boot sequence. I know that in later versions of Debian I could use the LSB header to achieve this, but is there anything comparable to the LSB header in Sarge?


  • Restore using Time Machine from a MacBook to a MacBook Pro (first Intel)

    - by Anders Nørgaard
    Hello. My girlfriend has a MacBook running 10.6.3, the first plastic version. The screen broke, and it's at the service store now. In the meantime, I have tried to restore from her Time Machine backup to my old MacBook Pro running 10.6.3 (the first Intel version). Everything seems to work out fine, but when it finishes, it says to reboot, and nothing happens. When I hold down the power button to power it down and start again, it comes up with the grey roll-down screen saying "you need to restart your machine" in different languages. I have tried the restore procedure twice more, and every time it ends up like this... Anyone have a suggestion what to do? Thanks - Anders.


  • Using SSL with Openfire

    - by Dan
    I'm having a rough time getting SSL configured properly on an Openfire install. Quite honestly, I just don't know what to do; the steps necessary to get a cert imported seem convoluted. Has anyone out there successfully done this? I'm running Openfire 3.6.4 on Server 2003 R2. I have a signed UC cert which is ready to roll; I just don't know what to do with it. I've been through tons of tutorials on converting from .crt to .der to .pem, using openssl and the Java tools, but it's only getting more confusing as I go.


  • Automated Deployment of Windows Application

    - by Phillip Roux
    Our development team is looking to automate our application deployment onto multiple servers. We need to control multiple Windows servers at once (stopping the database server and web server). We have an ASP.NET project, database upgrade scripts, reports, and various Windows services that potentially need to be updated during deployment. Currently we use Jenkins as our CI server to run our unit tests. If something goes wrong in the deployment process, we would like the ability to roll back. What tools are recommended to automate our deployment process?


  • MySQL Replication Error

    - by Ian
    I recently updated my master server to 5.1.41 and noticed that the slave was no longer replicating. It was returning this error:

        091208 12:53:31 [ERROR] Slave I/O: error connecting to master '[email protected]:3306' - retry-time: 10 retries: 86400, Error_code: 2026
        091208 12:53:41 [ERROR] Slave I/O: error connecting to master '[email protected]:3306' - retry-time: 10 retries: 86400, Error_code: 1045

    The first error is apparently an SSL error, followed by auth denied. Thing is, I haven't touched my SSL key or user access in months (and the key is fine, since I'm using the same one on that machine to replicate from other master servers). Any ideas? Edit: Months later, I've tried with 5.1.44 and the problem persists. When I roll back to 5.1.39, replication works great... I guess I can't use anything newer than 5.1.39.


  • SBS 2003 to SBS 2011

    - by Steve
    We've migrated SBS 2003 to SBS 2011, and so far everything has gone smoothly. We're in the final phase of migrating the last of the data over. Is there a way I can check to be sure that our old server isn't restarting due to the 21-day grace period? Our 21 days are up on Thursday, yet it's restarting on its own now... maybe a power supply issue? Is there any way to extend the 21 days, or roll it back, or does this take a call to MSFT? Very odd...


  • How to allow users to monitor performance of a set of servers without touching every server?

    - by Jon Seigel
    I'm not a sysadmin, so this may be trivial. We have about 20 Windows Server 2008 R2 VMs we want to monitor centrally using Perfmon. The only issue is that the user account that's going to be doing the monitoring is not (and I assume will never be) in the Administrators group. The servers and the user account (currently one, but there could be more) are all on the same domain. Right now we're running a pilot with 5 of the servers, touching each VM manually to set the permissions, which is already getting cumbersome to manage. If we decide to roll this out to all the servers, we need a scalable solution to control access. What is the most flexible way to accomplish this? I'd like a solution that would work with 200 servers just as easily as the 20 servers we have now.


  • What does the "Max Memory Size" on the new Intel Core i3 / i5 / i7 CPU's mean?

    - by Josh
    I just noticed in the specs of the new Intel Core i-series processors that there is a "Max Memory Size" that is usually pretty small -- anywhere from 8GB to 24GB. See here: http://ark.intel.com/Product.aspx?id=41316 By comparison, Core 2-based motherboards were just starting to roll out support for 32GB and greater memory sizes. Does anyone have any idea what the Max Memory Size indicates? Is it the total limit of the on-chip memory controller? A limit per channel? A limit per stick (i.e. density)? I'm thinking of building a decent machine that needs lots of RAM, so I'm looking at the i7 860.


  • What is the simplest way to build your own .deb package?

    - by Calvin Fisher
    Having used Ubuntu for several years now, I've assembled a short list of scripts and packages that I always install on my computers. I would like to pack them up into a .deb to make it easier to get set up on a fresh OS installation. I'm imagining, for instance, one package that would install all of the custom BASH scripts I've made for common tasks, and another that would depend on other packages (like w64codecs) that I always install but forget I need until I go to do something and it's not there. It doesn't even have to be by the book; I'm not looking to deploy these publicly. I'm just looking to roll up all these tasks into one sudo dpkg --install. To quantify "simple" or "easy": I'm looking for the method with the fewest steps, requiring the least technical knowledge and, most importantly, taking the least time.


  • HD Video Capture Card w/ Good API?

    - by Sheep Slapper
    Does anyone here know of a good HD video capture card with a good (comprehensive) API? I administer a few servers that do some video encoding right now, but when we make the switch to HD cameras, they won't be sufficient. In addition, the servers we have now are black boxes, closed to me except for starting/stopping the video capture device. I'd like to be able to roll my own so we can better integrate it with our existing systems, but I know almost nothing about what kinds of HD capture cards are out there, and if I can avoid spending money just to test their APIs, that would rock. So does anyone have any experience with this? All our other software is in C#, and I'd like to set up the new servers with web interfaces to start/stop the capture (also in C#, probably using .NET 3.5). I'm not sure how language-specific these APIs would be, but that's what I'm working with, just as a reference point. I appreciate any help the community can give!


  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client who has a hosting arrangement with 400 customer sites, all hosted through suPHP in CGI mode on Apache. The sysop is now gone, and the client is calling on me to roll out a new PHP application. Trouble is, server load is very high right now, and we have found that it's due to the crawlers. We had one customer in particular who complained of slow websites; we engaged a 304 header plugin on his site against most crawlers, and his site perked right up. We'd like to lower the overall load by issuing a global 304 header to all the crawlers while letting human visitors through, and I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it automatically affect all the sites at once.


  • Trying To Uninstall Exchange 2007 From Server - Stuck On "Mailbox Role Checks"

    - by Matthew Hodgkins
    Hi all, I am trying to uninstall Exchange 2007 on a secondary server which was decommissioned quite a while ago (cleaning up after an old network admin). The only role the server had was the Mailbox role (the primary server also has this). I tried uninstalling using the Programs and Features GUI tool, but that hung on "Mailbox Role Checks". I then tried uninstalling Exchange by running Setup.com /mode:uninstall. It has been stuck at 1% of "Mailbox Role Checks" for over 3 hours now. Are there any other options for uninstalling Exchange 2007 so the old server can be removed from the Exchange Management Console?


  • Internet cafe software for linux

    - by pehrs
    I have gotten a request to roll out a total of 8 internet cafes in a large network. The budget is non-existent, as it will all be done for a non-profit. I was planning to use Ubuntu and live CDs to minimize the amount of management required, but I can't seem to find any suitable internet cafe system that is Ubuntu-based. The requirements are pretty basic: it needs to keep track of logged-in time and log out users when their time is up. No billing will be done; it will just be used to ensure people can share the computers fairly. It should be possible to force a logout from a central system. Users will be unskilled, so it has to have a GUI. What (preferably free, considering the shoe-string budget) software would you suggest to manage this?


  • Windows 7: Any way to disable "Show characters" in WiFi network properties?

    - by Fox
    Hi everyone, here's my issue. I'm working at a school as an IT tech, and I'm currently planning to roll out Windows 7 on the students' laptops. The issue: when you go to the properties of a WiFi network, you have the field to input the WiFi key (a WPA2 key in my case), and you also have a checkbox that allows you to "unmask" the characters of the WiFi key. That is the problem: anyone who can access the WiFi network properties will be able to see the WiFi key, which is a real issue in a school environment where students are all eager to get the key for their precious iPod Touches, something I don't want to happen for obvious reasons... So, is there a way to disable that checkbox, or else make the field clear out when the checkbox is checked, just like on Windows XP or Vista? Thanks all for your answers.


  • Slow Network Performance with Windows Server 2008 R2 SP1

    - by Axeva
    I recently installed Service Pack 1 for Windows Server 2008 R2. Since then, network performance has been awful: both Windows 7 and Mac Snow Leopard clients see miserable speeds when trying to read from or write to the server. This is the exact update: Windows Server 2008 R2 Service Pack 1 x64 Edition (KB976932). It's a very simple file server setup, with no domain or Active Directory; essentially just shared folders. It's Windows Web Server that I'm running. Are there any settings I can tweak? Should I roll back the update (which doesn't seem wise)? Update: I've turned off power management for the network adapter. That may help; if the adapter doesn't have to power on at the start of a request, it should speed things up. Or so I would assume.


  • Problem Adding Windows 7 64-bit print drivers to 32-bit Windows 2003 Print Server

    - by Richard West
    I have installed the final RTM version of Windows 7 Professional 64-bit on a test system before we begin the rollout in our company. I'm having problems connecting to several HP printers that we have on the network. These printers are shared from a Windows 2003 server host. I have downloaded the latest HP Universal Print Driver; however, I'm unable to add the 64-bit driver to the 2003 server system (it's 32-bit). Does anyone have any advice on how I can get connected to these printers from the Windows 7 system?


  • Internet Explorer 9 takes a long time to load websites

    - by Steve
    IE9 on Windows 7 would load Google okay, and we could search on Google okay, but almost any other website would take an inordinate amount of time to load; the loading spinner would just sit there indefinitely. I uninstalled IE9 to roll back to IE8, and the same issue occurs. We've reset IE settings back to defaults, and there are no add-ons causing this. Firefox loads websites fine on this computer. Could it be an IE-specific virus or trojan? IE was displaying an incorrect/hijacked home page.


  • Automatically select last row in a set in Excel

    - by Luke
    In Excel 2003, I am trying to keep track of some petty cash. I have the sheet set up with the denominations along the top row, along with a subtotal and a difference column. I want a small section that shows how many rolls of coins I should have, found by taking the total amount for each denomination, dividing it by the number of coins in a roll, and rounding down to the nearest whole number. That part is fine. What I want is for that one section (how many rolls I should have) to be based on the last row that has information in it. For example, if the last row is row 13, it should read the data from B13, C13, D13, etc. I don't mind learning macros, if that's what the solution requires, but I don't want to be manually selecting the last row each time; I just want the worksheet to know automatically.


  • Sending email from an alternative domain to protect my "core" domain from spam filters

    - by Jack7890
    I run a website (seatgeek.com) that sends a lot of transactional email to users: account updates, alerts, etc. It's important to us that our domain remains clean in the eyes of spam filters. We'd like to roll out an email marketing campaign. It's nothing particularly spammy, but it would be the first time we ever emailed people who hadn't expressly asked to receive email from us; it's to market a new product we built to a specific niche of professionals. In order to protect our domain in the eyes of spam filters, we're considering sending the marketing email from an alternative domain, one we sometimes use as an alternative landing page for this new product. Is there any way this could backfire on us? Does it seem like a particularly poor idea?


  • Exchange 2013 Internal Relay via Smart Host

    - by Matt Clements
    Thank you in advance for your help! I am currently setting up an Exchange 2013 server to replace our old POP3/SMTP system, but we would like to roll this out gradually, when convenient for our staff. Our plan is therefore:

    1. Set up Exchange 2013 to retrieve email via the POP3 connector - done.
    2. Set up Exchange 2013 to send ALL mail via a smart host - issues.

    I have set the domains under Mail Flow > Accepted Domains to Internal Relay, enabled a smart host for * as the domain name, and disabled/deleted the accounts that are not set up yet; however, Exchange just bounces the emails with no errors.

