Search Results

Search found 65997 results on 2640 pages for 'custom post type'.


  • Custom LightDM sessions to launch an application

    - by zachtib
    I'm trying to set up Ubuntu to act as a kiosk running a custom application, and am trying to get a LightDM session built to start it automatically. Ideally, I'd like to have two sessions available from LightDM. The default would start my application fullscreen, and the other would open a minimal desktop in case any configuration (mostly connecting to a wireless network) needs to be done. I've done a lot of research over the last week on custom LightDM and GNOME sessions. I've got a custom greeter written for LightDM that can start either session, but I can't figure out how to add a specific application to the GNOME session that is ultimately started without just putting a launcher in the global startup directory, and I don't want to do that since I don't want the application starting when someone opens "configuration mode". Another problem I've run into with my current workaround is that the application doesn't fullscreen properly, which makes me think I'm not starting enough of a GNOME session (currently it's just metacity, no panel or anything else). Edit: I found a solution. See http://www.webupd8.org/2011/11/make-applications-autostart-only-in.html
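    For reference, LightDM discovers selectable sessions through .desktop entries under /usr/share/xsessions, so a kiosk session can be registered with a minimal sketch like the one below (all paths and names here are hypothetical):

        [Desktop Entry]
        Type=Application
        Name=Kiosk
        Comment=Runs the kiosk application full screen
        Exec=/opt/kiosk/start-kiosk.sh

    where start-kiosk.sh would start a bare window manager and then the application, e.g. "metacity & exec /opt/kiosk/kiosk-app". The separate "configuration mode" session would simply point its Exec at a minimal desktop instead, so the kiosk application never autostarts there.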

  • Choosing the correct network protocol for my type of game (it's Wc3 Warlock style)

    - by Moritz
    I need to code a little game for a school project. The type of the game is like the Warcraft 3 map "Warlock"; if anyone doesn't know it, here is a short description: up to ten players spawn into an arena filled with lava, and the goal of each player is to push the other players into the lava with spells (basically variations of missiles, AoE nukes, MOBA-style spells, etc.): http://www.youtube.com/watch?v=c3PoO-gcJik&feature=related We need to provide multiplayer support over the internet, and for that reason I am looking for the best network protocol for this type of game (UDP, TCP, lockstep, client-server...). The requirements are:
    - the same, stable simulation on all clients
    - up to ten players
    - up to ~100 missiles on the field
    - very low latency, since it's reaction-based (I don't know the method WC3 used, but it was playable with the old servers)
    What would be nice (if even possible, since the traffic might be too big):
    - support for soft bodies over the network (with Bullet physics), but this is no real requirement
    I read several articles about the lockstep method used for RTS games, which seems great, but does it fit real-time action games too (ping-related)? If anyone has run into the same problems/questions as me, I would be very happy about any help.

  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post on how to add a custom dropdown to the FIM Portal. I have decided to document this as I cannot find documentation on how to do it anywhere else. I hope others find it useful.

    For starters, this was not an easy task for me to figure out. I really would like to know why it is so cumbersome to do something that seems like a lot of people would need to do, but that's for another day. :)

    The dropdown I wanted to add was for 'Account Status', which would display whether the account is 'Enabled' or 'Disabled' in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface.

    The first thing I had to do was create the attribute itself. This is done by going to Administration → Schema Management from the FIM 2010 portal. Once here, click on All Attributes. What is listed here are all attributes and their associated Resource Types in FIM. To create the 'AccountStatus' attribute, click on New. Enter 'AccountStatus' with no spaces for the System Name and 'Account Status' for the Display Name. The Data Type is going to be 'Indexed String'. Click Next. Leave everything on the Localization tab at its defaults and click Next. On the Validation tab, enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values 'Enabled' and 'Disabled'. Click on Finish and then Submit to complete adding the attribute.

    The next step involves associating the attribute with a resource type. This is called 'binding' the attribute. From the Schema Management page, click on All Bindings. From the page that comes up, click on New. Enter 'User' for the Resource Type and 'Account Status' for the Attribute Type; this essentially binds the Account Status attribute to the 'User' Resource Type. Click Next. On the 'Attribute Override' tab, type 'Account Status' in the Display Name field. Click Next. On the 'Localization' tab, click Next. On the 'Validation' tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete.

    Now that the attribute and the binding are complete, you have to give users permission to see the attribute on the User Edit page. Go to Administration → Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the 'Target Resources' tab and look at the section named Resource Attributes. Type in the 'Account Status' attribute at the end and check it with the validator. Once done, click OK to save the changes.

    Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for user editing. Go to Administration → Resource Control Display Configuration. Navigate until you find the RCDC named Configuration for User Editing and click on it. The first step is to export the configuration data file: click on the Export configuration link and save the file to your desktop or another folder.

    Find the file you just exported and open it in your XML editor of choice. I use Notepad, but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. I used EmployeeType as my example. Copy the control from the beginning tag named <my:Control… to the ending tag </my:Control>. Now paste what you copied wherever you want it within the form, between two other controls. I chose to place the 'Account Status' field after the 'Account Name' field. After you paste the control, modify it so it looks like this:

        <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList"
            my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}"
            my:Description="{Binding Source=schema, Path=AccountStatus.Description}"
            my:RightsLevel="{Binding Source=rights, Path=AccountStatus}">
          <my:Properties>
            <my:Property my:Name="ValuePath" my:Value="Value"/>
            <my:Property my:Name="CaptionPath" my:Value="Caption"/>
            <my:Property my:Name="HintPath" my:Value="Hint"/>
            <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/>
            <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/>
          </my:Properties>
        </my:Control>

    Notice where the attribute you are dealing with is specified: everywhere the XML says AccountStatus. Once you are done modifying this, save the file and make sure it keeps its .xml extension.

    Now go back to the Configuration for User Editing screen and look at the section named 'Configuration Data'. Click the 'Browse' button, find the XML file you just modified, and choose it. Click OK at the bottom of the window and you are done! Now when you click on a user's name in the FIM Portal, you should see the newly added dropdown box.

    Later I will post more about this dropdown, specifically on how to automate actually 'Disabling' the account in the data source through the FIM workflows and MAs.

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server architecture is a very deep subject, and covering it in a single post is an almost impossible task. However, it is a very popular topic among beginners and advanced users alike. I have requested my friend Anil Kumar, an expert in the SQL domain, to help me write a simple post about the beginnings of SQL Server architecture. As stated, this is a very deep subject, so in this first article of the series he covers the basic terminology; in future articles he will explore the subject further. Anil Kumar Yadav is a Trainer, SQL Domain, at Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation, etc.

    In this article we will discuss the MS SQL Server architecture. The major components of SQL Server are:
    - Relational Engine
    - Storage Engine
    - SQLOS
    Now we will discuss and understand each one of them.

    1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. The tasks of the Relational Engine include: query processing, memory management, thread and task management, buffer management, and distributed query processing.

    2) Storage Engine: The Storage Engine is responsible for storage and retrieval of the data on the storage system (disk, SAN, etc.). When we talk about any database in SQL Server, there are two types of files created at the disk level: the data file and the log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let's understand data files and log files in more detail.

    Data File: The data file stores data in the form of data pages (8 KB each), and these data pages are logically organized in extents.

    Extents: Extents are logical units in the database; a combination of 8 data pages, i.e. 64 KB, forms an extent. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages (index, system, object data, etc.); uniform extents, on the other hand, are dedicated to only one type.

    Pages: These are some of the types of data pages SQL Server can store:
    - Data Page: Holds the data entered by the user, except data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml.
    - Index: Stores the index entries.
    - Text/Image: Stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
    - GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): Used for saving information related to the allocation of extents.
    - PFS (Page Free Space): Information related to page allocation and unused space available on pages.
    - IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index, per allocation unit.
    - BCM (Bulk Changed Map): Keeps information about the extents changed in a bulk operation.
    - DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.

    Log File: Also known as the write-ahead log, it stores modifications to the database (DML and DDL). Sufficient information is logged to be able to:
    - roll back transactions, if requested
    - recover the database in case of failure
    Write-ahead logging is used to create the log entries; transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model.

    SQLOS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed by the database engine are taken care of by SQLOS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQLOS provides various operating-system services, such as memory management, which deals with the buffer pool, the log buffer, and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR).

    I guess this brief article gives you an idea about the various terminology used in relation to SQL Server architecture. In future articles we will explore them further.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
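    As a quick companion to the data file / log file discussion above, the files backing the current database can be listed with a small catalog query (a sketch, not part of the original article):

        -- size is reported in 8 KB pages; converted to MB here
        SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
        FROM sys.database_files;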

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago, working on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies; he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and the concept of simplifying the basics is a challenge. In the real world we always see checks and queues for a process, say railway reservations, banks, customer support, etc.; there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time. The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system. Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data. Concurrency is also affected when multiple processes attempt to change the same data simultaneously. There are two approaches to managing concurrent data access: the optimistic and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic Concurrency: The default behavior is to acquire locks to block access to data that another process is using. It assumes that enough data modification operations are in the system that any given read operation is likely to be affected by a data modification made by another user (i.e., it assumes conflicts will occur). It avoids conflicts by acquiring a lock on data being read, so no other process can modify that data, and by acquiring locks on data being modified, so no other process can access that data for either reading or modifying. In short: readers block writers, and writers block readers and writers.

    Optimistic Concurrency: Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying. The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurred. Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to it. Processes modifying the data are unaffected by processes reading it, because the readers access a saved version of the data rows. Readers do not block writers and writers do not block readers; writers, however, can and will block writers.

    Transaction Processing: A transaction is the basic unit of work in SQL Server. A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: marked with a BEGIN TRAN and ended by a COMMIT TRAN or ROLLBACK TRAN). Transactions must exhibit all the ACID properties.

    ACID Properties: Transaction processing must guarantee the consistency and recoverability of SQL Server databases, ensuring all transactions are performed as a single unit of work regardless of hardware or system failure. A – Atomicity, C – Consistency, I – Isolation, D – Durability.
    - Atomicity: Each transaction is treated as all or nothing; it either commits or aborts.
    - Consistency: Ensures that a transaction won't allow the system to arrive at an incorrect logical state; the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on data.

    Transaction Dependencies: In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency problems or consistency problems:
    - Lost Updates: Occur when two processes read the same data, both manipulate the data changing its value, and then both try to update the original data to the new value; the second process might overwrite the first update completely.
    - Dirty Reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable Reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads.
    - Phantoms: Occur when membership in a set changes, i.e., when two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels: SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted: All behaviors except lost updates are possible. It is implemented by allowing read operations to take no locks at all, so they are not blocked by conflicting locks acquired by other processes, and the process can read data that another process has modified but not yet committed. When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process performing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan has yet reached. Conversely, performing an allocation order scan under read uncommitted can cause you to miss a row completely, when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed: Comes in two varieties: optimistic and pessimistic (the default). It ensures that a read never sees data that another application hasn't committed. If another transaction is updating data and holds exclusive locks on it, your transaction has to wait for those locks to be released. Your transaction must put share locks on the data it visits, which means that data might be unavailable for others to use; a share lock doesn't prevent others from reading, but it prevents them from updating. Read committed (snapshot) also ensures that an operation never reads uncommitted data, but not by forcing other processes to wait: SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as they were before the update operation began.

    Repeatable Read: This is a pessimistic isolation level. It ensures that if a transaction revisits data, or a query is reissued, the data doesn't change; issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no such changes can be made by other transactions. It does, however, allow phantom rows to appear. Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot: Snapshot Isolation (SI) is an optimistic isolation level. It allows processes to read older versions of committed data if the current version is locked. The difference between snapshot and read committed has to do with how old the older versions are allowed to be. It is possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable: This is the strongest of the pessimistic isolation levels. It adds to repeatable read by ensuring that if a query is reissued, rows have not been added in the interim, i.e., phantoms do not appear. Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes. In addition, serializable isolation requires that you lock not only data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to still return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock. The level gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics of locking, blocking, and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning with SQL Server Locking Basics.
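    To make the isolation levels concrete, here is a minimal T-SQL sketch of enabling and using snapshot isolation (the database and table names are illustrative, not from the original post):

        ALTER DATABASE SampleDB SET ALLOW_SNAPSHOT_ISOLATION ON;
        GO
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            -- reads the row versions as of the start of the transaction,
            -- without blocking concurrent writers
            SELECT COUNT(*) FROM dbo.Orders;
        COMMIT TRAN;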
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

  • NSIS: Access values of combo box of custom page?

    - by Xinus
    I have added a custom page to the NSIS installer using ini files. Here is the code:

        . . .
        ; Welcome page
        !insertmacro MUI_PAGE_WELCOME
        Page custom customPage "" ": custom page"
        . . .
        Function customPage
          GetTempFileName $R0
          File /oname=$R0 customPage.ini
          InstallOptions::dialog $R0
          Pop $R1
          StrCmp $R1 "cancel" done
          StrCmp $R1 "back" done
          StrCmp $R1 "success" done
          error:
            MessageBox MB_OK|MB_ICONSTOP "InstallOptions error:$\r$\n$R1"
          done:
        FunctionEnd
        . . .

    And here is the customPage.ini file:

        ; Ini file generated by the HM NIS Edit IO designer.
        [Settings]
        NumFields=2

        [Field 1]
        Type=Label
        Text=Select Version:
        Left=4
        Right=53
        Top=16
        Bottom=26

        [Field 2]
        Type=Combobox
        Text=Combobox
        ListItems=
        Left=53
        Right=138
        Top=14
        Bottom=107

    I want to set the values of the combobox dynamically using NSIS script. How can I access the combobox in NSIS?
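    Since InstallOptions reads the field definitions from the extracted ini file, one common approach is to write the items into that file with WriteINIStr after extracting it and before showing the dialog; a sketch (the version strings are placeholders):

        Function customPage
          GetTempFileName $R0
          File /oname=$R0 customPage.ini
          ; populate the combobox and pre-select a value at runtime
          WriteINIStr $R0 "Field 2" "ListItems" "1.0|1.1|2.0"
          WriteINIStr $R0 "Field 2" "State" "1.0"
          InstallOptions::dialog $R0
          ...
        FunctionEnd

    The selected value can later be read back the same way, with ReadINIStr on "Field 2" / "State".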

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET may well be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU.

    Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem.

    CXPACKET explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0) which waits for all the threads to complete and gathers the results together to present on the client's side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all CXPACKET wait types are bad; you might experience a case where it totally makes sense, and there might also be cases where it is unavoidable. If you remove this particular wait type for a query, that query may run slower because the parallel operations are disabled for it.

    Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server's workload type.

    OLTP: On a pure OLTP system, where the transactions are small and the queries are short but quick, set the "Maximum Degree of Parallelism" to 1 (one). This makes sure the query never goes parallel and does not incur extra engine overhead.

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Data warehousing / reporting server: As queries will be running for a long time, it is advisable to set the "Maximum Degree of Parallelism" to 0 (zero). This way most queries will utilize the parallel processors, and long-running queries get a boost in performance from the multiple processors.

        EXEC sys.sp_configure N'max degree of parallelism', N'0'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Mixed system (OLTP & OLAP): Here is the challenge: the right balance has to be found. I have taken a very simple approach. I set the "Maximum Degree of Parallelism" to 2, which means a query still uses parallelism but only on 2 CPUs. However, I keep the "Cost Threshold for Parallelism" very high. This way not all queries qualify for parallelism; only the queries with a higher cost go parallel. I have found this to work best for a system that has OLTP queries and also has a reporting workload. Here I am setting 'Cost Threshold for Parallelism' to a value of 25 (just for illustration); you can choose any value, and you can find the right one only by experimenting with your system. In the following script, I am setting the 'Max Degree of Parallelism' to 2, which indicates that a query with a higher cost (here, more than 25) will qualify to run in parallel on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself.

        EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
        GO
        EXEC sys.sp_configure N'max degree of parallelism', N'2'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Read all the posts in the Wait Types and Queue series. Additionally, the comment by Jonathan Kehayias is a must-read.
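    As a quick sanity check before and after changing these options, the current values can be read from sys.configurations (a small companion query, not from the original post):

        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');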
    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest you read Books On-Line for further clarification. All the discussion of wait stats here is generic, and it varies from system to system. It is recommended that you test this on a development server before implementing it on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • How can I use Databound variables in conditional statements within Custom Databound controls?

    - by William Calleja
    I'm developing custom DataBound controls that make use of an '<ItemTemplate>' tag and '<%# %>' server tags to generate some data. However, I need to make a conditional statement within one of my databound controls, as shown below:

        <custom:DataboundControl runat="server">
          <ItemTemplate>
            <% if(((Dictionary<string, string>)Container.DataItem)["MyVariable"]=="") { %>
              <!-- Conditional Code Happens Here -->
            <% } %>
          </ItemTemplate>
        </custom:DataboundControl>

    Right now my code isn't working because the compiler cannot resolve my Container.DataItem variable within a <% %> tag, and a <%# %> tag doesn't support conditional statements. What can I use?
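    One way around this (a sketch, not the only answer): <%# %> blocks don't allow statements, but they do allow arbitrary expressions, so the condition can be folded into C#'s conditional operator, with the conditional markup emitted as a string (the markup below is illustrative):

        <custom:DataboundControl runat="server">
          <ItemTemplate>
            <%# ((Dictionary<string, string>)Container.DataItem)["MyVariable"] == ""
                  ? "<span class='empty'>no value</span>"
                  : string.Empty %>
          </ItemTemplate>
        </custom:DataboundControl>

    For anything larger than a line or two of markup, calling a protected helper method on the page from inside <%# %> keeps the template readable.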

  • Help me classify this type of software architecture

    - by Alex Burtsev
    I have read some books about software architecture, since we are using it in our project, but I can't classify our architecture properly. It's some kind of enterprise architecture, but what exactly: SOA, ESB (Enterprise Service Bus), Message Bus, Event-Driven SOA? There are so many terms in enterprise software... The system is based on custom XML message exchanges between services (it's not SOAP, nor any other XML-based standard, just plain XML). These messages represent notifications (state changes) that are applied to the domain model (it's not like CRUD, where you serialize the whole domain object and pass it to a service for persistence). The system is centralized, and system participants use different programming languages and frameworks (C++, C#, Java). Also, messages are not processed at the moment they are received: they are stored first, and processing begins on demand. It's called SOA+EDA :-)

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store; the information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:
    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type
    A request type can be one of the following categories:
    - Calculations, including the default calculation as well as both server- and client-side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears, as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked, since a lock-and-send operation occurs during this process.
    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging: Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The syntax is:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement enables transaction logging for all Essbase applications, and the second statement disables transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except Sample. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended, to optimize overall Essbase performance.

    Configuring Transaction Log Replay: Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations: To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command with the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists, or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation; REPLAYSECURITYOPTION 3 will use the administrator's security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files: Transaction logs and archived replay data load and rules files are not removed automatically; they must be removed manually. Since these files can consume a considerable amount of space, they should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order, from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations: For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually, and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References: For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
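    For reference, listing and replaying logged transactions from MAXL might look like the following sketch (the application/database name and the timestamp are illustrative):

        query database Sample.Basic list transactions;
        alter database Sample.Basic replay transactions after '11_20_2012:09:00:00';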

  • Is there a way to make an `enum` type unsigned?

    - by Kirill V. Lyadvinsky
    Is there a way to make an enum type unsigned? The following code gives me a warning about signed/unsigned comparison:

        #include <cstddef>
        #include <iostream>

        enum EEE {
          X1 = 1
        };

        int main() {
          size_t x = 2;
          EEE t = X1;
          if ( t < x )
            std::cout << "ok" << std::endl;
          return 0;
        }

    I've tried to force the compiler to use an unsigned underlying type for the enum with the following:

        enum EEE {
          X1 = 1,
          XN = 18446744073709551615LL
        };

    But that still gives the warning.
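    If C++11 is available, the underlying type can be specified directly; a sketch:

        #include <cstdint>
        #include <cstddef>
        #include <iostream>

        enum EEE : std::uint64_t {   // fixed, unsigned underlying type
            X1 = 1
        };

        int main() {
            std::size_t x = 2;
            EEE t = X1;
            if (t < x)               // unsigned vs. unsigned: no warning
                std::cout << "ok" << std::endl;
            return 0;
        }

    Under C++03 there is no such syntax; the usual workaround is an explicit cast at the comparison, e.g. static_cast<std::size_t>(t) < x.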

  • PPF Savings Interest Rate Increased To 8.6% From 8% [India]

    - by Gopinath
    Here is some good news for small Indian investors who save money in Public Provident Fund (PPF) accounts operated in post offices and a few nationalized banks: the returns on your PPF savings are going to increase. The Indian government has decided to increase the rate of interest paid to customers from 8% to 8.6%. To put it in numbers, if you have 2,00,000 of savings in a PPF account, it is going to give returns of 17,200/- per annum, compared to returns of 16,000/- at 8%. Also, the maximum cap on investments into a PPF account per annum is increased to 1,00,000 from 70,000/-. PPF is one of the safest debt investments and gives very decent returns, but if you are a salaried employee with a PF account, then consider investing in Voluntary PF (VPF) instead of PPF, as VPF returns are higher than PPF's. CC image credit: Dave Dugdale. This article titled, PPF Savings Interest Rate Increased To 8.6% From 8% [India], was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

  • Custom http service responds fine to local IP address but NOT to localhost or 127.0.0.1

    - by Scrappydog
    I'm trying to connect to a custom HTTP service written by another developer. The service responds fine on a local IP address and port number, such as http://10.1.1.1:1234, but it does NOT respond at http://localhost:1234 or http://127.0.0.1:1234. The service is a simple single-function application written in VC++ that takes an HTTP POST string and returns another string. I'm trying to call it from C# using HttpWebRequest.GetResponse, but I can reproduce the same problem manually from a web browser... The test environment is Windows 2008 Server. Bottom line: I'm looking for some troubleshooting tips to help the other developer fix his code.
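    The classic cause of exactly this symptom is the listening socket being bound to one specific interface address rather than to all interfaces. A hedged Winsock sketch of the difference (illustrative, not the service's actual code):

        #include <winsock2.h>
        #include <string.h>

        void bind_listener(SOCKET listen_sock)
        {
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port   = htons(1234);

            /* Bound like this, the service answers only on 10.1.1.1: */
            addr.sin_addr.s_addr = inet_addr("10.1.1.1");

            /* Bound to INADDR_ANY instead, it would also answer on
               127.0.0.1 (localhost): */
            /* addr.sin_addr.s_addr = htonl(INADDR_ANY); */

            bind(listen_sock, (struct sockaddr*)&addr, sizeof(addr));
        }

    A quick netstat -an on the server shows which address the port is actually listening on (10.1.1.1:1234 vs. 0.0.0.0:1234), which would confirm or rule this out before touching the code.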

  • Ubuntu 12.04: how do I select a custom program to open a specific file type

    - by user68477
    I want to set a file type to be opened by a specific program other than Ubuntu's default. I usually right-click, choose Properties, then Open With, and select the application I need. However, this does not work in this case: the application I want is not shown in the "Open With" dialog, and there apparently is no way I can browse to it or type in a custom command like I used to in 10.04. Is this a bug that I should file, or a feature that can be worked around?
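    One workaround (a sketch; every path, name, and MIME type below is illustrative) is to describe the program in a .desktop file yourself and register it as the default handler from the command line:

        # 1) describe the program so the desktop knows about it
        cat > ~/.local/share/applications/myapp.desktop <<'EOF'
        [Desktop Entry]
        Type=Application
        Name=My App
        Exec=/opt/myapp/bin/myapp %f
        EOF

        # 2) find the file's MIME type, then make the new entry its default
        xdg-mime query filetype somefile.ext
        xdg-mime default myapp.desktop application/x-mytype

    After this, the program should also start appearing in the "Open With" list for files of that type.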

  • Any way to access the type of a Scala Option declaration at runtime using reflection?

    - by Graham Lea
    So, I have a Scala class that looks like this:

        class TestClass {
          var value: Option[Int] = None
        }

    I'm tackling a problem where I have a String value and I want to coerce it into that Option[Int] at runtime using reflection. To do so, I need to know that the field is an Option and that the type parameter of the Option is Int. What are my options for figuring out that the type of 'value' is Option[Int] at runtime (i.e. using reflection)? I have seen similar problems solved by annotating the field, e.g. @OptionType(Int.class). I'd prefer a solution that didn't require annotations on the reflection target if possible.
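    Plain Java reflection cannot recover the Int here, since Scala erases Option[Int] to Option<Object> in the bytecode; the scala-reflect runtime API (Scala 2.10+) keeps the full signature, though. A sketch:

        import scala.reflect.runtime.universe._

        class TestClass {
          var value: Option[Int] = None
        }

        object Inspect extends App {
          // the compiler-generated getter for the var carries the full type
          val getter    = typeOf[TestClass].decl(TermName("value")).asMethod
          val fieldType = getter.returnType          // Option[Int]
          val elemType  = fieldType.typeArgs.head    // Int

          println(fieldType =:= typeOf[Option[Int]]) // true
          println(elemType =:= typeOf[Int])          // true
        }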

  • Resource interpreted as other but transferred with MIME type text/html.

    - by Zhami
    I'm transferring fragments of HTML via Ajax. Safari (4.0.5) reports: "Resource interpreted as other but transferred with MIME type text/html." The file I fetch has a .html extension, and the server does set the header for this: Content-Type: text/html. As I said, the content is a fragment of HTML, which is injected into the page (with jQuery). The contents of the file look like:

        <html>
        ... some valid html ...
        </html>

    What else might Safari need to see to make it interpret the received content as HTML? TIA.

  • Created Custom Report in Google Analytics, Primary Account Doesn't See It?

    - by Anagio
    A client shared access to their Google Analytics account with me. I created a custom report, which shows up under Custom Reporting for me. I assumed they would also see this report since it was in their account, but they sent me a screenshot showing there's no custom report listed. I have already sent them the shortcut link to the custom report configuration; this seems to be the way to share custom reports, along with dashboards, in GA now. Do custom reports only appear to the account (email) that created them? I would think everyone who had access to the account would see the custom report.

  • How to coerce type of ActiveRecord attribute returned by :select phrase on joined table?

    - by tribalvibes
    Having trouble with AR 2.3.5, e.g.:

        users = User.all(
          :select     => "u.id, c.user_id",
          :from       => "users u, connections c",
          :conditions => ...
        )

    Returns, e.g.:

        => [#<User id: 1000>]
        >> users.first.attributes
        => {"id"=>1000, "user_id"=>"1000"}

    Note that AR returns the id of the model searched as numeric, but the selected user_id of the joined model as a String, although both are int(11) in the database schema. How could I better form this type of query to select columns of tables backing multiple models and retrieve their natural type rather than String? Seems like AR is punting on this somewhere. How could I coerce the returned types at AR load time, and not have to tack .to_i (etc.) onto every post-hoc access?
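    ActiveRecord only typecasts columns that belong to the model's own table, so extra columns pulled in from a join come back as raw strings. One workaround (a hedged sketch; the method name is made up) is to do the cast once in the model instead of at every call site:

        class User < ActiveRecord::Base
          # cast the joined column selected as "c.user_id" exactly once
          def joined_user_id
            self[:user_id] && Integer(self[:user_id])
          end
        end

        users.first.joined_user_id  # => 1000, as an integer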

  • WordPress add-post validation

    - by dskanth
    This is embarrassing, but I am surprised to see that there is no validation by default while adding a new post in WordPress. When I don't enter a title or any content and just hit Publish, it says that the post is published, yet when I view the front end there is no new post. How could WordPress skip such simple validation for adding a post? At the least I expected server-side validation (if not client-side). I don't know why the validation is skipped; it is up to WordPress whether they incorporate it in new versions. But I want to know how I can add JavaScript (or jQuery) validation for adding a post in WordPress. I know it must not be at all difficult, but being new to WordPress, I would like to get some hints. From Firebug, I could see the form renders like:

        <form id="post" method="post" action="post.php" name="post">
          ...
        </form>

    Where shall I put my JavaScript validation code?
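    A minimal jQuery sketch of such a check, assuming the default wp-admin edit form (where, as shown above, the form id is "post" and the title input id is "title"); it could be enqueued as an admin script from a plugin or theme:

        // runs in wp-admin; jQuery ships in noConflict mode there,
        // hence the wrapper that restores the $ shortcut
        jQuery(document).ready(function ($) {
            $('#post').submit(function (e) {
                if ($.trim($('#title').val()) === '') {
                    alert('Please enter a post title.');
                    e.preventDefault();
                    return false;
                }
            });
        });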

  • In C#: How to declare a generic Dictionary whose key and value types have a common constraint type?

    - by Marcel
    Hi all, I want to declare a dictionary that stores typed IEnumerables of a specific type, with that exact type as key, like so:

        private IDictionary<T, IEnumerable<T>> _dataOfType where T : BaseClass; // does not compile!

    The concrete classes I want to store all derive from BaseClass, hence the idea to use it as a constraint. The compiler complains that it expects a semicolon after the member name. If it worked, I would expect later retrieval from the dictionary to be as simple as:

        IEnumerable<ConcreteData> concreteData;
        _dataOfType.TryGetValue(typeof(ConcreteType), out concreteData);

    How can I define such a dictionary?
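    A field cannot introduce its own type parameter in C#, so one common pattern (a sketch, with made-up member names) is to key the dictionary by Type and confine the casts to a pair of generic accessors:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class DataStore
        {
            private readonly IDictionary<Type, IEnumerable<BaseClass>> _dataOfType =
                new Dictionary<Type, IEnumerable<BaseClass>>();

            public void Store<T>(IEnumerable<T> items) where T : BaseClass
            {
                _dataOfType[typeof(T)] = items.Cast<BaseClass>();
            }

            public IEnumerable<T> Retrieve<T>() where T : BaseClass
            {
                IEnumerable<BaseClass> items;
                return _dataOfType.TryGetValue(typeof(T), out items)
                    ? items.Cast<T>()
                    : Enumerable.Empty<T>();
            }
        }

    The constraint then lives on the methods, which is the only place the compiler allows it, and callers get back a correctly typed IEnumerable<T> without ever seeing the cast.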

  • How do I restrict accepting only one type in my generic method?

    - by kunjaan
    I have a generic function foo, which accepts arguments of any type and prints them out:

        public static <T> T foo(T... arg) {
          List<T> foo = Arrays.asList(arg);
          for (T t : foo) {
            System.out.println(t);
          }
          return null;
        }

    How do I make sure that the arguments received are of only one type? For example, {1, 'a', 3} should be invalid; it should be either all numbers or all characters.
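    The catch is that a call like foo(1, 'a', 3) already compiles: the compiler simply infers a common supertype for T rather than rejecting the mix, so a single unbounded type parameter cannot rule it out at compile time. A hedged runtime check is one fallback (a sketch):

        // rejects calls whose arguments are not all of the same runtime class
        @SafeVarargs
        public static <T> void requireUniformType(T... args) {
            if (args.length == 0) return;
            Class<?> expected = args[0].getClass();
            for (T t : args) {
                if (!expected.equals(t.getClass())) {
                    throw new IllegalArgumentException(
                        "Mixed argument types: " + expected.getName()
                        + " vs " + t.getClass().getName());
                }
            }
        }

    Alternatively, tightening the bound (e.g. <T extends Number> T foo(T... arg)) makes foo(1, 'a', 3) a compile-time error, at the cost of accepting only numeric arguments.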

  • Is it possible in Scala to force the caller to specify a type parameter for a polymorphic method ?

    - by Alex Kravets
        //API
        class Node
        class Person extends Node

        object Finder {
          def find[T <: Node](name: String): T = doFind(name).asInstanceOf[T]
        }

        //Call site (correct)
        val person = find[Person]("joe")

        //Call site (dies with a ClassCast inside b/c inferred type is Nothing)
        val person = find("joe")

    In the code above, the call site "forgot" to specify the type parameter; as the API writer, I want that to mean "just return Node". Is there any way to define a generic method (not a class) to achieve this (or something equivalent)? Note: using a manifest inside the implementation to do the cast, with something like if (manifest != scala.reflect.Manifest.Nothing), won't compile... I have a nagging feeling that some Scala wizard knows how to use Predef.<:< for this :-) Ideas?
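    One well-known trick (a sketch) takes the route the question title asks about: rather than defaulting to Node, it makes the compiler reject call sites that let T be inferred as Nothing, by supplying ambiguous implicits for exactly that case:

        sealed trait NotNothing[T]
        object NotNothing {
          implicit def good[T]: NotNothing[T] = new NotNothing[T] {}
          // two competing values for Nothing make the implicit ambiguous,
          // so find("joe") fails to compile instead of failing at runtime
          implicit val bad1: NotNothing[Nothing] = null
          implicit val bad2: NotNothing[Nothing] = null
        }

        object Finder {
          def find[T <: Node](name: String)(implicit ev: NotNothing[T]): T =
            doFind(name).asInstanceOf[T]
        }

        // find[Person]("joe") compiles; find("joe") is now a compile error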

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
    I have been working a lot on wait stats and wait types recently. Last year, I requested blog readers to send me their respective servers' wait stats, and I appreciate their kind response, as I received wait stats from many readers. I took each of the results and carefully analyzed them, and provided the necessary feedback to each person who sent me his wait stats and wait types. Based on the feedback I got, many of the readers tuned their servers. After a while I got further feedback on my recommendations, and again I collected wait stats. I recorded the wait stats and my recommendations and did further research. At some point in time, there were more than 10 different round trips of recommendations and suggestions. Finally, after six months of hands-on performance tuning work, I have collected some real-world wisdom because of this. Now I plan to share my findings with all of you over here.

    Before anything else, please note that all of these are based on my personal observations and opinions. They may or may not match the theory available elsewhere, and some of the suggestions may not match your situation. Remember, every server is different, and consequently there is more than one solution to a particular problem. However, this series is written with wait stats in mind; while I was working on various performance tuning consultations, I did many more things than just tuning wait stats.

    Today we will discuss how to capture wait stats. I use the diagnostic script created by my friend and SQL Server expert Glenn Berry to collect wait stats. Here is the script:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS
        (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
            100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
            ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
         FROM sys.dm_os_wait_stats
         WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK',
            'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE','CHECKPOINT_QUEUE',
            'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP',
            'CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
            'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN',
            'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'))
        SELECT W1.wait_type,
           CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
           CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
           CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 99 -- percentage threshold
        OPTION (RECOMPILE);
        GO

    This script uses the Dynamic Management View sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait stats, which are not useful for diagnosing performance-related bottlenecks. Additionally, note that OPTION (RECOMPILE) at the end of the query ensures that every time the query runs it retrieves fresh data, not cached data. This dynamic management view accumulates all the information since the time the SQL Server service was last restarted. You can also manually clear the wait stats using the following command:

        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

    Once the wait stats are collected, we can start analyzing them and try to see what is causing any particular wait stat to reach a higher percentage than the others. Many wait stats are related to one another. When CPU pressure is high, all the CPU-related wait stats show up on top; but when that is fixed, all the wait stats related to the CPU start showing reasonable percentages. It is difficult to give a sure solution, but there are good indications and good suggestions on how to solve this. I will keep this blog post updated as I post more details about wait stats and how I reduce them. The reference to Books On-Line is over here. Of course, I have selected February to run this wait stats series; I am already cheating by picking the smallest month to run it. :)

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solutions to the issue of CXPACKET wait time. Today I am going to talk about a few other suggestions that can help reduce CXPACKET waits. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD, I totally agree; I covered them in detail in yesterday's blog post. Today we discuss a few other ways CXPACKET can be reduced.

    Potential reasons:
    - If data is heavily skewed, there are chances that the query optimizer misestimates the amount of data and assigns too few threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET waits.
    - While retrieving the data, one of the threads hits an IO, memory, or CPU bottleneck and has to wait for those resources to execute its tasks, which may create CXPACKET waits as well.
    - The data being retrieved sits on IO subsystems of different speeds. (This is not common and hardly ever the case, but there is a chance.)
    - Higher fragmentation in some areas of the table can mean less data per page, which may lead to CXPACKET waits.
    As I said, the reasons mentioned here are not the major causes of CXPACKET waits, but any of these scenarios can contribute to the wait time.

    Best practices to reduce CXPACKET waits:
    - Refer to the earlier article regarding MAXDOP and Cost Threshold.
    - Defragmenting indexes can help, as more data can be obtained per page (assuming a fill factor close to 100).
    - If data is spread across multiple files that sit on multiple physical drives of similar speed, CXPACKET waits may go down.
    - Keep statistics updated, as this gives the query optimizer better estimates when assigning threads and dividing the data among the available threads. Updating statistics can significantly improve the query optimizer's ability to render a proper execution plan, which affects the parallelism process in a positive way overall.

    Bad practice: In one recent consultancy project, when I was called in, I noticed that an 'experienced' DBA had seen high CXPACKET waits and, to reduce them, had increased the number of worker threads. In reality, increasing worker threads led to many other issues: with more threads, more memory was used, leading to memory pressure, and the CPU scheduler faced more context switching, degrading performance further. When I explained all this to the 'experienced' DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may cause heavy stalling for parallel queries. I suggest NOT touching the worker-thread setting when dealing with CXPACKET waits.

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of wait stats here is generic, and it varies from system to system. You are recommended to test this on a development server before implementing it on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
