Search Results

Search found 4240 results on 170 pages for 'delayed execution'.


  • Cool projects at Codeplex

    - by Tiago Salgado
    There’s a significant number of useful projects on Codeplex that meet some of our needs. Here are two that have already been useful to me: Droid Explorer - as the name implies, it lets us explore an Android device, with features such as: copying local files to the device; rebooting the device, including into recovery mode; opening files for viewing/execution locally with the default executable for the file type; a package manager (install & uninstall); taking a screenshot (landscape or portrait); etc. Virtual Router – Wifi Hot Spot (Windows 7 / 2008 R2) - allows you to create an access point, sharing your local Internet connection with other wireless devices. It’s no doubt an application to keep installed, simply for how easily it lets you create a resource like this one.

    Read the article

  • monitor multiple work repositories in ODI11g EM

    - by tina.wang
    When you create a domain, by default it lets you specify master/work repository information. That work repository is automatically configured and can be monitored directly in EM. But your master repository may contain multiple work repositories, so how do you let EM monitor all of them? 1) The work repositories must already be registered in your master repository. 2) In the WebLogic console, generate a generic data source for every work repository, e.g. jdbc/mySecondWork. 3) In ODI Console, create a new repository connection for every work repository; the master JNDI information is jdbc/odiMasterRepository by default. OK, now you can see that the work repository status is configured. By the way, there is a bug when the work repository is of execution type.
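
    For illustration, the resulting JNDI layout might look something like this (the names are hypothetical except jdbc/odiMasterRepository and jdbc/mySecondWork, which appear in the steps above):

        jdbc/odiMasterRepository   ->  master repository schema
        jdbc/myFirstWork           ->  first work repository schema
        jdbc/mySecondWork          ->  second work repository schema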

    Read the article

  • Is Query Performance different for different versions of SQL Server?

    - by Ronak Mathia
    I have three update queries in my stored procedure for three different tables. Each table contains almost 200,000 records, and all records have to be updated. I am using indexes to speed up performance. It works quite well with SQL Server 2008: the stored procedure takes only 12 to 15 minutes to execute (updating almost 1,000 rows per second in all three tables). But when I run the same scenario on SQL Server 2008 R2, the stored procedure takes much longer to complete execution: about 55 to 60 minutes (updating almost 100 rows per second in all three tables). I couldn't find any reason or solution for this. I have also tested the same scenario on SQL Server 2012, but the result is the same as above. Please give suggestions.
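
    A few checks worth running before blaming the engine (a hedged sketch - MyDb, dbo.MyTable, SomeCol and SomeKey are placeholders, not from the question; stale statistics after moving databases to a new server are a common cause of plan regressions):

        -- Confirm the compatibility level the upgraded database is running under
        SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';

        -- Rebuild statistics on the updated tables
        UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;

        -- Re-run one slow update with IO/time statistics on, to compare servers
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;
        UPDATE dbo.MyTable SET SomeCol = SomeCol + 1 WHERE SomeKey <= 200000;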

    Read the article

  • SSL timeout on some sites, across all browsers, on Mac OS X Snow Leopard

    - by dansays
    For the past several weeks, I've been receiving "Error 7 (net::ERR_TIMED_OUT): The operation timed out" when I attempt to connect to either Twitter or PayPal via SSL. I get this specific error in Google Chrome, but the same problem occurs in both Safari and Firefox. Other sites work fine, and other computers on my network can access these two sites. I have no firewall settings that would prevent me from accessing these sites over port 443. I notice that Twitter and PayPal both have "Verisign Class 3 Extended Validation SSL CA" certificates. It is unclear whether this is related to the problem. In an effort to troubleshoot, I attempted to open the test sites referenced on Verisign's root certificate support page, which worked fine. Just to be sure, I downloaded the root package file and installed all of the included Verisign certificates. No joy. I feel like I've hit a dead end. Any ideas? Update the first: I also cannot connect to FedEx.com, which also has a Verisign Class 3 Extended Validation cert. Update the second: Aaaaaaand it fixed itself. I did nothing. Or I did something that worked, but in a delayed fashion. Frustrating, but a win is a win. I'll take it.
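
    For anyone hitting something similar, a quick way to test the handshake outside any browser (standard OpenSSL client, nothing site-specific assumed):

        openssl s_client -connect twitter.com:443
        openssl s_client -connect www.paypal.com:443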

    Read the article

  • Android/Java AI agent framework/middleware

    - by corneliu
    I am looking for an AI agent framework to use as a starting point in an Android game I have to create for a university research project. It has been suggested that I use JADE, but as far as I can tell it's not a suitable framework for games (at least for my game idea) because it runs in a split-execution mode and needs an always-active network connection to a main host. What I want is just a little something to give me a head start. I am willing to adjust the game's features to the framework because it's more of a mockup game, and the purpose is to compare the performance of a couple of agents in the game world. The game will be very simplistic, with a minimal UI that displays various stats about the characters in the game (so no graphics, no pathfinding). Thank you.

    Read the article

  • Coming Soon! Oracle Global Trade Management Solutions

    An exciting new solution offering, Oracle Global Trade Management helps companies manage the complexity of worldwide trade compliance while also mitigating compliance risk and uncovering supply chain inefficiencies. Has the complexity of worldwide trade compliance created barriers and roadblocks in your business processes? Do you seek ways to mitigate compliance risk while at the same time finding hidden cash in your supply chain? If these issues affect your company, tune in to hear how the new Oracle Global Trade Management solution, part of the Value Chain Execution suite and a native trade and transportation platform, helps organizations lower operational costs and improve network efficiencies by automating and streamlining cross-border transactions.

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any database users sent would have to be restored to its own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database.

    This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.
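
    As a rough illustration of the record-and-replay idea described above (a minimal sketch, not Red Gate's actual snapshot format; for brevity it assumes every column is either an Int32 or a string, and it returns plain rows rather than wrapping them in an IDataReader):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.IO;

        static class QuerySnapshotSketch
        {
            // Record: stream every row of an IDataReader straight to a binary file.
            public static void Record(IDataReader reader, string path)
            {
                using (var w = new BinaryWriter(File.Create(path)))
                {
                    w.Write(reader.FieldCount);
                    while (reader.Read())
                    {
                        w.Write(true); // marker: another row follows
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            object v = reader.GetValue(i);
                            if (v is int)
                            {
                                w.Write((byte)0);   // type tag: Int32
                                w.Write((int)v);
                            }
                            else
                            {
                                w.Write((byte)1);   // type tag: string
                                w.Write(v == null ? "" : v.ToString());
                            }
                        }
                    }
                    w.Write(false); // marker: end of rows
                }
            }

            // Replay: read the stored rows back without touching the database.
            // The real tool presents these through an IDataReader wrapper so the
            // population code cannot tell a snapshot from a live query.
            public static List<object[]> Replay(string path)
            {
                var rows = new List<object[]>();
                using (var r = new BinaryReader(File.OpenRead(path)))
                {
                    int fieldCount = r.ReadInt32();
                    while (r.ReadBoolean())
                    {
                        var row = new object[fieldCount];
                        for (int i = 0; i < fieldCount; i++)
                            row[i] = r.ReadByte() == 0 ? (object)r.ReadInt32()
                                                       : r.ReadString();
                        rows.Add(row);
                    }
                }
                return rows;
            }
        }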

    Read the article

  • In-depth Coverage for Oracle Workflow

    - by Steven Chan (Oracle Development)
    I'm lucky to work with many talented people in the Applications Technology Group, and many of them contribute articles to this blog.  Some team members have their own blogs.  If you work with Oracle Workflow, here's one that you should be following: Oracle E-Business Suite - Workflow. This blog is updated every few months by our development team with in-depth technical articles about Oracle Workflow-related topics.  For example, articles posted there include: Implementing a post-notification function to perform custom validation; E-Business Suite Proactive Support - Workflow Analyzer; Asynchronous Business Event Subscriptions - Troubleshooting Tips; Oracle E-Business Suite RCD - Applications Technology Releases 12.1 and 12.2; SMTP Authentication Feature in R12.1.3; Configurable User LOV in Worklist UI; Oracle Business Event and Subscriptions Execution Flow; Understanding AQs in Workflow; SSL in Oracle Workflow; Leveraging Oracle Workflow for Declarative PageFlow. If you have suggestions about Workflow topics that you'd like to see covered there, drop them a line.

    Read the article

  • Help me make a cronjob/screen command please?

    - by Josip Gòdly Zirdum
    Hi guys, I want to set up a cronjob on reboot to do this:

        cd /home/admin/vivalaminecraft.com && screen -d -m -S mcscreen && mono McMyAdmin.exe

    The issue is that when I execute this it seems to create the screen, but it doesn't run mono McMyAdmin.exe inside the screen... Is there something like a "then" command, so it does 1, then 2, then 3? Could someone please help out? :) So I tried this:

        @reboot screen -dmS minecraft
        @reboot cd /home/admin/vivalaminecraft.com
        @reboot mono McMyAdmin.exe

    It still doesn't work. The screen is created, but it doesn't have the mono execution in it. I also put this in a script:

        #!/bin/bash
        screen -dmS minecraft;
        cd /home/admin/vivalaminecraft.com;
        mono McMyAdmin.exe;

    Is this correct?
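
    For what it's worth, one likely fix, as an untested sketch: screen runs whatever command follows -dmS inside the new session, so the cd and mono need to be part of that single command rather than chained after screen with &&.

        @reboot /usr/bin/screen -dmS minecraft /bin/bash -c 'cd /home/admin/vivalaminecraft.com && exec mono McMyAdmin.exe'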

    Read the article

  • Enter comments on queries in TraceTune

    - by Bill Graziano
    I’m trying to make TraceTune (and eventually ClearTrace) work the way I do.  My typical query tuning session goes like this: Run a trace and upload to TraceTune/ClearTrace Tune the slowest queries Goto 1 I might do this two or three times in one day and then not come back to it again for weeks or even months.  This is especially true for those clients that I only visit a few times per month.  In many cases I’ll look at a query, decide I can’t do much with it and move on.  I needed a way to capture that information. TraceTune now lets you enter a comment for a query.  It can be as simple or as complex as you like.  The comment will be shown inline with the execution history of that query. This should let you walk back through your history with a query and decide whether you should spend more time tuning it.

    Read the article

  • SO-Aware @ TechReady (Microsoft Event)

    - by SURESH GIRIRAJAN
    A session on SO-Aware was presented at the Microsoft TechReady event this week; check here for more details: http://tellagostudios.com/blog/so-aware-highlighted-microsoft-techready. Also check there for more details on SO-Aware and how to leverage it within your enterprise if you're using BizTalk Server, WCF services, or services built on Azure. It provides a lot of capabilities, such as:

    - Centralized service repository
    - Centralized configuration management
    - Service testing
    - Monitoring
    - Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others

    The SO-Aware Test Workbench provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load testing engine, allowing developers to transparently load test applications built on Microsoft's service-oriented technologies such as WCF, BizTalk Server, or Windows Server or Azure AppFabric.

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness, additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics: starting a Thread.

    Why Do I Care?

    The scenario I'll use for needing a thread is writing to a file.  Sometimes writing to a file takes a while, and you don't want your user interface to lock up until the file write is done. In other words, you want the application to remain responsive to the user.

    How Would I Go About It?

    The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away.  Whenever the file-writing thread completes, it will let the user know.  In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this.

    Show Me the Code?

    The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file:

        using System.Threading;

    When you run code on a thread, the code is specified via a method.  Here's the code that will execute on the thread:

        private static void WriteFile()
        {
            Thread.Sleep(1000);
            Console.WriteLine("File Written.");
        }

    The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second.  This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method.  A thread's method could be instance or static.  Notice that the method has no parameters and no return type.

    As you know, the way to refer to a method is via a delegate.  There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below:

        ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);

    I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread:

        Thread fileWriter = new Thread(fileWriterHandlerDelegate);

    As shown above, the argument type for the Thread constructor is the ThreadStart delegate type, and the fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and specifies what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to start it explicitly, like this:

        fileWriter.Start();

    Now the code in the WriteFile method is executing on a separate thread, while the main thread that started the fileWriter thread continues on its own.  You have two threads running at the same time.

    Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together?

    The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation.

        using System;
        using System.Threading;

        namespace BasicThread
        {
            class Program
            {
                static void Main()
                {
                    ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile);
                    Thread fileWriter = new Thread(fileWriterHandlerDelegate);

                    Console.WriteLine("Starting FileWriter");
                    fileWriter.Start();
                    Console.WriteLine("Called FileWriter");

                    Console.ReadKey();
                }

                private static void WriteFile()
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("File Written");
                }
            }
        }

    And here's the output:

        Starting FileWriter
        Called FileWriter
        File Written

    So, Why are the Printouts Backwards?

    The output above corresponds to the Console.WriteLine statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program.  In multi-threading, you can't make any assumptions about when a given thread will run.  In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time.

    Interesting Tangent, but What Should I Get Out of All This?

    Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive.  The file-writing logic ran for a while, but the main thread returned to the user, as demonstrated by the printout of "Called FileWriter".  When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe

    Read the article

  • Folder redirection GPO doesn't seem to be working

    - by homli322
    I've been trying to set up roaming profiles and folder redirection, but have hit a bit of a snag with the latter. This is exactly what I've done so far (I have OU permissions and GPO permissions over my division's OU):

    1. Created a group called Roaming-Users in the OU 'Groups'
    2. Added a single user (testuser) to the group
    3. Using the Group Policy Management tool (via RSAT on Windows 7), right-clicked on the Groups OU and selected 'Create a GPO in this domain, and Link it here'
    4. Added my 'Roaming-Users' group to the Security Filtering section of the policy
    5. Added the Folder Redirection option, specifically for Documents; it is set to redirect to \\myserver\Homes$\%USERNAME%\Documents (Homes$ exists and is sharing-enabled)
    6. Right-clicked on the policy under the Groups OU and checked Enforced
    7. Logged into a machine as testuser successfully
    8. Created a simple text file, saved some gibberish, logged off
    9. Remoted into the server with Homes$ on it and noticed that the directory Homes$\testuser was created, but was empty; no text file to be found

    From what I've read, I did everything I ought to... but I can't quite figure out the issue. I had no errors when I logged off about syncing issues (offline files is enabled) or anything, so I can only imagine my file should have ended up on the share. Any ideas?

    EDIT: Using gpresult /R, I confirmed the user is in fact part of the Roaming-Users group, but does not have the policy applied, if that helps.

    EDIT 2: Apparently you can't apply GPOs to groups... so I applied to users and used the same security filter to limit it to my test user. Nothing happens as far as redirection goes, but I now have the following error in the event log:

        Folder redirection policy application has been delayed until the next logon because the group policy logon optimization is in effect

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerization of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective, which is typically on the road and when they are still with the customer.

    Oracle has invested many years of research in understanding customers' mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.”

    Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps, which was featured on the Oracle Applications Blog.

    Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt.

    Blurring the Lines between Work and Personal Life

    My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing.

    The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age.

    Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared.

    How to Survive and Thrive in Today’s Marketplace

    The notion of Facebook isn’t new. We lived it in pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view, the sooner they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected.

    By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, you have better things to do than to be on Facebook.

    This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work-life experience.

    Hernan Capdevila
    Vice President, Oracle Applications Development

    Read the article

  • Outside Operations in JD Edwards EnterpriseOne Manufacturing

    - by Amit Katariya
    Upcoming E1 Manufacturing webcast

        Date: March 30, 2010
        Time: 10:00 am MDT
        Product Family: JD Edwards EnterpriseOne Manufacturing

    Summary: This one-hour session is recommended for functional users who would like to understand the Outside Operations process, including setup, execution, and troubleshooting. Topics will include: concept; setup in the context of PDM, SFC, Product Costing, and Manufacturing Accounting; processing; troubleshooting. A short live demonstration (only if applicable) and a question and answer period will be included.

    Register for this session. Oracle Advisor is dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services.

    Important links related to webcasts: Advisor Webcast Current Schedule; Advisor Webcast Archived Recordings. The links above require valid access to My Oracle Support.

    Read the article

  • Oracle webcast: BI/DWH series (1) [title partially lost to encoding damage]

    - by Yusuke.Yamamoto
    [The body of this Japanese-language post was lost to encoding damage; only scattered fragments survive. They mention Oracle Database 11gR2, BI query tuning with Enterprise Manager and SQL monitoring, In-Memory Parallel Execution, Oracle GRID Center validation work, and DWH (data warehouse) configurations.]

    Read the article

  • Low process priority problem

    - by Svepe
    I have just set up Ubuntu 12.04 64-bit with the Cinnamon desktop and the 3.5.0-030500 kernel on my new laptop with an Ivy Bridge i7. I decided to test its performance by running a single-threaded, CPU-hungry program that I often use for camera calibration. Unfortunately, it ended up running much slower than I expected. After some investigation it turned out that the program's priority is automatically changed from normal to low, which makes the program even slower. I have also noticed that all user programs such as Skype and Firefox are set to low priority. I tried manually resetting the priority to normal or even very high using the "renice" command, which works temporarily until the kernel scheduler (I guess) resets the priority to low. Is this normal behaviour, and how can I overcome the problem of the slowed-down execution? P.S. I also tried the 3.2 kernel, but the problem is still present.
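
    One thing worth checking, as a sketch: on kernels from 2.6.38 onward the scheduler's automatic task grouping can be what monitoring tools report as "low" priority, and it can be toggled and tested:

        # Is automatic process grouping enabled?
        cat /proc/sys/kernel/sched_autogroup_enabled

        # Temporarily disable it and re-test the program's speed
        sudo sysctl kernel.sched_autogroup_enabled=0

        # Or raise one process's priority by hand (lower nice = higher priority)
        sudo renice -n -5 -p <pid>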

    Read the article

  • Are your merchandise systems limiting growth? Oracle Retail's Merchandise Operations Management could be the answer

    - by user801960
    In this video, Lara Livgard, Director of Oracle Retail Strategy, introduces Oracle Retail Merchandise Operations Management (MOM), a set of integrated, modular solutions that support buying, pricing, inventory management and inventory valuation across a retailer's channels, countries, and business models. MOM is the backbone of successful retail operations, providing timely and accurate visibility across the entire enterprise and enabling efficient supply chain execution driven by plans and forecasts. Its modular architecture facilitates tailored, high-value implementations, giving retailers the information they need in order to offer a quality customer experience through a truly integrated multi-channel approach. Further information is available on the Oracle Retail website regarding Merchandise Operations Management.

    Read the article

  • Is it legal to develop a game using D&D rules?

    - by Max
    For a while now I've been thinking about trying my hand at creating a game similar in spirit and execution to Baldur's Gate, Icewind Dale and their offshoots. I'd rather not face the full bulk of work in implementing my own RPG system - I'd like to use D&D rules. Now, reading about the subject, it seems there is something called "The License" which allows a company to brand a game as D&D. This license seems to be exclusive, and let's just say I don't have the money to buy it :p. Is it still legal for me to implement and release such a game, commercially or open-source? I'm not sure exactly which edition would fit best, but since Baldur's Gate is based on 2nd edition, could I go ahead and implement that? In short: what are the issues concerning licensing and publishing when it comes to D&D? Also: I didn't see any similar question...

    Read the article

  • How to SET TIMING ON for parallel upgrades to 12c?

    - by Mike Dietrich
    Have you asked yourself how to get timings for all statements in an Oracle Database 12c upgrade? When you run the parallel upgrade via catctl.pl, the Perl script that drives the parallel upgrade in Oracle Database 12c, you may also want timings written to your logfile during execution. As catctl.pl does not offer an option for this yet, the best way to achieve it is to edit the catupses.sql script in $ORACLE_HOME/rdbms/admin, as this script gets called over and over again throughout all steps of the upgrade run. Just add the SET TIMING ON block below to catupses.sql and start your upgrade:

        Rem =============================================
        Rem Call Common session settings
        Rem =============================================
        @@catpses.sql

        Rem =============================================
        Rem Set Timing On during the Upgrade
        Rem =============================================
        SET TIMING ON;

        Rem =============================================
        Rem Turn off PL/SQL event used by APPS
        Rem =============================================
        ALTER SESSION SET EVENTS='10933 trace name context off';

    -Mike

    PS: This may become the default in a future patch set

    Read the article

  • Navigant Consulting Implements Oracle's PeopleSoft Enterprise 9.1 to Integrate Financial and HR Information

    - by jay.richey
    Integration to Help Global Consultancy Increase Business Productivity and Streamline Operations

    Redwood Shores, Calif. - Dec. 15, 2010

    "Our business is based on the seamless execution and expertise of our highly-trained consultants, and we're always seeking ways to improve processes so they can focus on providing excellent client service," said Changappa Kodendera, CIO, Navigant Consulting. "Our phased implementation of Oracle's PeopleSoft Enterprise 9.1 will provide us with a solid technology foundation that we can rely on to support our global consulting business, with a scalable platform that facilitates further improvement."

    Read the press release
    Watch their video

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the next days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I'd like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward:

        [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store
        [new] Added support for the {filename} placeholder in both Data Flow Profiling and the CSV Log Provider

    The added feature allows you to generate Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.:

        DTLoggedExec.exec /FILE:"MyPackage.dtsx" /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • ubuntu 10.04 + php + postfix

    - by mononym
    I have a server running Ubuntu 10.04, PHP 5.3.5 (FPM) and Nginx. I have installed Postfix and set it to loopback-only (I only need to send). The problem is that it is not sending. If I issue (at the command line):

        echo "testing local delivery" | mail -s "test email to localhost" [email protected]

    I get the email no problem, but mail sent through PHP does not arrive. When I send it via PHP, mail.log shows:

        Mar 28 10:15:04 host postfix/pickup[32102]: 435EF580D7: uid=0 from=<root>
        Mar 28 10:15:04 host postfix/cleanup[32229]: 435EF580D7: message-id=<20120328091504.435EF580D7@FQDN>
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: from=<root@FQDN>, size=1127, nrcpt=1 (queue active)
        Mar 28 10:15:04 host postfix/local[32230]: 435EF580D7: to=<root@FQDN>, orig_to=<root>, relay=local, delay=3.1, delays=3/0.01/0/0.09, dsn=2.0.0, status=sent (delivered to maildir)
        Mar 28 10:15:04 host postfix/qmgr[32103]: 435EF580D7: removed

    Any help appreciated. My main.cf file:

        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no

        # appending .domain is the MUA's job.
        append_dot_mydomain = no

        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h

        readme_directory = no

        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.

        myhostname = FQDN
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        #myorigin = $mydomain
        mydestination = FQDN, localhost.FQDN, , localhost
        relayhost = $mydomain
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = loopback-only
        virtual_alias_maps = hash:/etc/postfix/virtual
        home_mailbox = mail/
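
    Before digging further into Postfix, it may be worth confirming how PHP itself hands mail off (a quick diagnostic sketch; replace the address with a real mailbox and watch /var/log/mail.log while it runs):

        # Which sendmail binary does PHP's mail() use?
        php -i | grep -i sendmail_path

        # Send a one-line test through PHP itself
        php -r 'var_dump(mail("user@example.com", "php test", "test body"));'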

    Read the article

  • How to log kernel panics without KVM

    - by Spacedust
    My server keeps crashing and I can't find out why. It all started after my datacenter upgraded the RAM from 16 GB to 32 GB. I also found entries like these in dmesg - they started to show up just before the first kernel panic:

        EXT4-fs error (device md2): ext4_ext_find_extent: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_ext_remove_space: bad header/extent in inode #97911179: invalid magic - magic 5f69, entries 28769, max 26988(0), depth 24939(0)
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 20974: 8589 blocks in bitmap, 54896 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
        EXT4-fs error (device md2): ext4_ext_split: inode #97911179: (comm pdflush) eh_entries 28769 != eh_max 26988!
        EXT4-fs (md2): delayed block allocation failed for inode 97911179 at logical offset 1039 with max blocks 1 with error -5
        This should not happen!! Data will be lost
        EXT4-fs error (device md2): ext4_mb_generate_buddy: EXT4-fs: group 21731: 5 blocks in bitmap, 60762 in gd
        JBD: Spotted dirty metadata buffer (dev = md2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.

    My system is CentOS 5.8 64-bit with the latest kernel, 2.6.18-308.20.1.el5. How can I check the reason for a kernel panic without having access to the KVM? I have told my datacenter admins to check the memory in the server.
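
    One option, as a sketch: netconsole streams kernel messages, including panic output, to another machine over UDP. The IPs, interface and MAC address below are placeholders; kdump is the heavier-weight alternative if you need full crash dumps.

        # On a second machine (192.168.0.2 here), listen for the messages
        # (netcat flag syntax varies between flavours):
        nc -u -l 6666

        # On the crashing server, load netconsole pointed at that machine:
        modprobe netconsole netconsole=6665@192.168.0.1/eth0,6666@192.168.0.2/00:11:22:33:44:55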

    Read the article
