Search Results

Search found 94339 results on 3774 pages for 'system data'.


  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle: whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any other vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in defining this term, explaining that he believes it is a convergence of four essential technology elements:
    - Event processing for event filtering and business rules, with Oracle Event Processing
    - Data transformation and loading, with Oracle Data Integrator
    - Real-time replication and integration, with Oracle GoldenGate
    - Analytics and data discovery, with Oracle Business Intelligence
    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured or semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process a greater volume and variety of data at a faster velocity, with greater results. Fast data processing (and especially real-time processing) has always been our credo at Oracle with each of these Fusion Middleware products. For example, Oracle GoldenGate continues to get even faster: the recent 11g R2 release of Oracle GoldenGate brings even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

    Read the article

  • How to Visualize your Audit Data with BI Publisher?

    - by kanichiro.nishida
    Do you know how many reports on your BI Publisher server were accessed yesterday? Or how many users accessed reports yesterday, or the average number of users accessing reports during the week versus the weekend, or in the morning versus the afternoon? With BI Publisher 11g you can now audit your users' report access and understand the state of the reporting environment at the server, user, or report level. In a previous post I talked about BI Publisher's auditing functionality and how to enable it so that BI Publisher can start collecting such data (How to Audit and Monitor BI Publisher Reports Access?). Now, how can you visualize that auditing data to understand it better and gain more insight? With the Fusion Middleware Audit Framework you have the option to store the auditing data in a database instead of a log file, which is the default. Once you enable the database storage option, your auditing data (that is, your user report access data) lives in database tables, so you can start visualizing the data, creating reports, analyzing, and sharing with BI Publisher. So first, let's take a look at how to enable the database storage option for the auditing data.

    How to Feed the Auditing Data into a Database

    First you need to create a database schema for the Fusion Middleware Audit Framework with RCU (Repository Creation Utility). If you have already installed BI Publisher 11g you should be familiar with RCU: it creates the database schemas necessary to run any Fusion Middleware product, including the BI products. You can use the same RCU that you used for your BI or BI Publisher installation to create this audit schema.

    Create the Audit Schema with RCU

    Here are the steps:
    1. Go to $RCU_HOME/bin and execute the 'rcu' command.
    2. Choose Create at the starting screen and click Next.
    3. Enter your database details and click Next.
    4. Choose the option to create a new prefix, for example 'BIP', 'KAN', etc.
    5. Select 'Audit Services' from the list of schemas.
    6. Click Next and accept the tablespace creation.
    7. Click Finish to start the process.

    After this, the following three audit-related schemas should exist in your database:
    - <prefix>_IAU (e.g. KAN_IAU)
    - <prefix>_IAU_APPEND (e.g. KAN_IAU_APPEND)
    - <prefix>_IAU_VIEWER (e.g. KAN_IAU_VIEWER)

    Set Up the Datasource in WebLogic

    After you create the database schema for your auditing data, you need to create a JDBC connection on your WebLogic Server so the Audit Framework can access the schema that was created with RCU in the previous step:
    1. Connect to the Oracle WebLogic Server administration console: http://hostname:port/console (e.g. http://report.oracle.com:7001/console).
    2. Under Services, click the Data Sources link.
    3. Click 'Lock & Edit' so that you can make changes.
    4. Click New -> 'Generic Data Source' to create a new data source.
    5. Enter the following details for the new data source. Name: a name such as Audit Data Source-0. JNDI Name: jdbc/AuditDB. Database Type: Oracle.
    6. Click Next and select 'Oracle's Driver (Thin XA) Versions: 9.0.1 or later' as the database driver (if you're using an Oracle database), then click Next.
    7. The Connection Properties page appears. Enter the following information. Database Name: the name of the database (SID) to which you will connect. Host Name: the hostname of the database. Port: the database port. Database User Name: the name of the audit schema that you created in RCU; the suffix is always IAU for the audit schema. For example, if you gave the prefix as 'BIP', then the schema name would be 'BIP_IAU'. Password: the password for the audit schema that you created in RCU.
    8. Click Next. Accept the defaults, and click Test Configuration to verify the connection, then click Next.
    9. Check the listed servers where you want to make this JDBC connection available, and click 'Finish'.

    After that, make sure you click 'Activate Changes' at the top left-hand side so the new JDBC connection takes effect.

    Register the Audit Data Store with Your Domain

    Finally, you can register the JNDI/JDBC datasource as your auditing data store with Fusion Middleware Control (EM). Here are the steps:
    1. Log in to Fusion Middleware Control.
    2. Navigate to the WebLogic Domain, right-click on 'bifoundation…..', select Security, then Audit Store.
    3. Click the searchlight icon next to the Datasource JNDI Name field.
    4. Select the audit JNDI/JDBC datasource you created in the previous step in the pop-up window and click OK.
    5. Click Apply to continue.
    6. Restart all the WebLogic Servers in the domain.

    After this, BI Publisher should start feeding all the auditing data into the database table called 'IAU_BASE'. Try logging in to BI Publisher and opening a couple of reports; you should see the activity audited in the 'IAU_BASE' table. If it's not working, check the log file, located at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/AdminServer-diagnostic.log, to see if there are any errors.

    Once you have the data in the database table, it's time to visualize it with BI Publisher reports!

    Create a First BI Publisher Auditing Report

    Register the Audit Datasource as a JNDI Datasource

    The first thing you need to do is register the audit datasource (the JNDI/JDBC connection you created in the previous step) as a JNDI data source in BI Publisher. Since it is a JDBC connection registered as JNDI, you don't need to create a new JDBC connection by typing the connection URL, username/password, etc. You can just register it using the JNDI name (e.g. jdbc/AuditDB):
    1. Log in to BI Publisher as Administrator (e.g. weblogic).
    2. Go to the Administration page.
    3. Click 'JNDI Connection' under Data Sources and click 'New'.
    4. Type the Data Source Name and JNDI Name. The JNDI Name is the one you created in the WebLogic console as the auditing datasource (e.g. jdbc/AuditDB).
    5. Click 'Test Connection' to make sure the datasource connection works.
    6. Provide appropriate roles so that report developers or viewers can use this data source for their reports.
    7. Click 'Apply' to save.

    Create the Data Model
    1. Select Data Model from the 'New' toolbar menu.
    2. Set 'Default Data Source' to the audit JNDI data source you created in the previous step.
    3. Select 'SQL Query' for your data set.
    4. Use Query Builder to build a query, or just type a SQL query. Either way, the table you want to report against is 'IAU_BASE'. This IAU_BASE table contains the auditing data for the other products running on the WebLogic Server as well, such as JPS, OID, etc. So if you care only about BI Publisher, filter on the 'IAU_COMPONENTTYPE' column, which contains the product name (e.g. 'xmlpserver' for BI Publisher). Here is my sample SQL query.
        select
            "IAU_BASE"."IAU_COMPONENTTYPE" as "IAU_COMPONENTTYPE",
            "IAU_BASE"."IAU_EVENTTYPE" as "IAU_EVENTTYPE",
            "IAU_BASE"."IAU_EVENTCATEGORY" as "IAU_EVENTCATEGORY",
            "IAU_BASE"."IAU_TSTZORIGINATING" as "IAU_TSTZORIGINATING",
            to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') IAU_DATE,
            to_char("IAU_TSTZORIGINATING", 'DAY') as IAU_DAY,
            to_char("IAU_TSTZORIGINATING", 'HH24') as IAU_HH24,
            to_char("IAU_TSTZORIGINATING", 'WW') as IAU_WEEK_OF_YEAR,
            "IAU_BASE"."IAU_INITIATOR" as "IAU_INITIATOR",
            "IAU_BASE"."IAU_RESOURCE" as "IAU_RESOURCE",
            "IAU_BASE"."IAU_TARGET" as "IAU_TARGET",
            "IAU_BASE"."IAU_MESSAGETEXT" as "IAU_MESSAGETEXT",
            "IAU_BASE"."IAU_FAILURECODE" as "IAU_FAILURECODE",
            "IAU_BASE"."IAU_REMOTEIP" as "IAU_REMOTEIP"
        from "KAN3_IAU"."IAU_BASE" "IAU_BASE"
        where "IAU_BASE"."IAU_COMPONENTTYPE" = 'xmlpserver'

    Once you have saved a sample XML for this data model, you can create a report with it.

    Create the Report

    Now you can use one of BI Publisher's layout options to design the report layout and visualize the auditing data. I'm a big fan of the Online Layout Editor: it's easy and simple to create reports with, and on top of that, all reports created with the Online Layout Editor get the Interactive View, with automatic data linking and filtering, without any setting or coding. If you haven't tried the Interactive View or the Online Layout Editor, you might want to check these previous blog posts (Interactive Reporting with BI Publisher 11G, Interactive Master Detail Report Just A Few Clicks Away!). But of course, you can use other layout design options such as RTF templates. Here are some sample screenshots of my report design with the Online Layout Editor.

    Visualize and Gain More Insight About Your Customers (Users)!

    Now you can visualize your auditing data to better understand and gain more insight into the reporting environment you manage. It has personally been helping me answer questions like these:
    - How many reports were accessed or opened yesterday, today, last week?
    - Who is accessing which report, and at what time?
    - In which time windows does most of the report access happen?
    - What are the most viewed reports?
    - Who are the active users?
    - What is the trend in report access or user access over the last month, the last 6 months, the last 12 months, etc.?

    I was talking with one of the best concierges in the world at a hotel the other day, and he was telling me that the best concierges know their customers inside out, so they can provide a very personal service that is customized to each customer's specific needs. Well, the same is true when it comes to administering and managing your reporting environment, right? The best way to serve your customers (report users, both viewers and developers) is to understand how they use it, what they use, and when they use it. Auditing is not just about compliance; it is a way to improve customer service. The BI Publisher 11g auditing feature enables just that, helping you understand your customers better. Happy customer service, and be the best reporting concierge!

    P.S. Please share with us what other information would be helpful for you in the auditing. As always, any feedback is of great value and inspiration to us!
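    As a quick follow-on, simple aggregates over the same columns answer several of the questions above directly. A minimal sketch of a daily access count per report (column and table names as in the sample query above; the KAN3_IAU schema prefix will differ in your environment):

        select
            "IAU_RESOURCE",
            to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD') as IAU_DATE,
            count(*) as ACCESS_COUNT
        from "KAN3_IAU"."IAU_BASE"
        where "IAU_COMPONENTTYPE" = 'xmlpserver'
        group by "IAU_RESOURCE", to_char("IAU_TSTZORIGINATING", 'YYYY-MM-DD')
        order by IAU_DATE, ACCESS_COUNT desc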

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from the book Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site.

    If you have installed SQL Server but are missing the Data Tools or Reporting Services:
    1. Double-click the SQL Server 2012 installation media. Click the Installation link on the left to view the installation options.
    2. Click the top link, New SQL Server stand-alone installation or add features to an existing installation.
    3. Follow the SQL Server Setup wizard until you get to the Installation Type screen. At that screen, select Add features to an existing instance of SQL Server 2012.
    4. Click Next to move to the Feature Selection page. Select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well.
    5. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so.

    Configure Reporting Services

    If you installed Reporting Services during the installation of the SQL Server instance, SSRS will be configured automatically for you. If you install SSRS later, you will have to go back and configure it as a subsequent step:
    1. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager, then click Connect on the Reporting Services Configuration Connection dialog box.
    2. On the left-hand side of the Reporting Services Configuration Manager, click Database. Click the Change Database button on the right side of the screen.
    3. Select Create a new report server database and click Next. Click through the rest of the wizard, accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, which is used as scratch space when preparing reports for user requests.
    4. Now click Web Service URL on the left-hand side of the Reporting Services Configuration Manager. Click the Apply button to accept the defaults. If the Apply button is grayed out, move on to the next step. This step sets up the SSRS web service, the program that runs in the background and communicates between the web page (which you will set up next) and the databases.
    5. The final configuration step is to select the Report Manager URL link on the left. Accept the default settings and click Apply. If the Apply button was already grayed out, SSRS was already configured. This step sets up the Report Manager web site where you will publish reports.

    You may be wondering whether you also must install a web server on your computer. SQL Server does not require Internet Information Services (IIS), the Microsoft web server, to be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box.

    Tomorrow's Post

    Tomorrow's blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in easy, simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The Set-up

    Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.

    The Challenge

    I'm not sure of the best way to track the progress in the individual categories toward each level. The "business" rules are:
    - You have to achieve a certain number of points in each category to move up.
    - If you get the number of points needed in Cat 8, but still have other work to do to complete the level, any new Cat 8 points count toward your overall score, but don't "roll over" into the next level.
    - The number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed.
    - The number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table.

    So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:

    Possible Solutions
    1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
    2. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
    3. Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the User table.

    Pros & Cons

    (1) seems like by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category ids to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care.

    (2) seems complicated: every time I add or subtract a user or a category, I have to maintain the other table. I foresee synchronization challenges.

    (3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like this (a fuller sketch follows at the end of this post):

        SELECT categoryId, SUM(points)
        FROM UserTasks
        WHERE userId = {user.id} AND countsTowardLevel = {user.level}
        GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.

    P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game dev-focused one. Curious to see if I get different answers one place than the other....
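    For what it's worth, a minimal sketch of the schema change option #3 implies (generic SQL; countsTowardLevel is the column name from the query above, and the NULL convention is one possible reading of the "roll over" rule, not something from the original post):

        -- Tag each completed task with the level it counted toward; one convention
        -- is NULL for points earned after the category was already capped.
        ALTER TABLE UserTasks ADD countsTowardLevel INT NULL;

        -- Per-category progress for user 3 while working on level 5:
        SELECT categoryId, SUM(points) AS levelPoints
        FROM UserTasks
        WHERE userId = 3
          AND countsTowardLevel = 5
        GROUP BY categoryId;

    A nice property of this shape is that the full point history stays in UserTasks, so the overall score is just SUM(points) with no level filter.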

    Read the article

  • Where to store front-end data for "object calculator"

    - by Justin Grahn
    I recently completed a language library that acts as a giant filter for food items, and flows a bit like this: Products -> Recipes -> MenuItems -> Meals and finally, upon submission, creates an Order. I have also completed a database structure that stores all the pertinent information for each class, and it seems to fit my needs. The issue I'm having is linking the two. I imagined all of the information being local to each instance of the product, where there exists one back-end user who edits and manipulates data, and multiple front-end users who select their Meal(s) to create an Order. Ideally, all of the front-end users would have all of this information stored locally within the library, and would update the library on startup from a database. How should I go about storing the data so that I can load it into the library every time the user opens the application? Do I package a database onboard and just load and populate every time? The only method I can currently conceive of doing this, even if I only have 500 possible Product objects, would require me to foreach the list for every Product that I need to match to a Recipe, and so on and so forth, every time I relaunch the program, which seems like a lot of wasteful loading. Here is a general flow of my architecture:

    Products:

        public class Product : IPortionable
        {
            public Product(string n, uint pNumber = 0)
            {
                name = n;
                productNumber = pNumber;
            }

            public string name { get; set; }
            public uint productNumber { get; set; }
        }

    Recipes:

        public class Recipe // class declaration line inferred from the constructors below
        {
            public Recipe(string n, decimal yieldAmt, Volume.Unit unit)
            {
                name = n;
                yield = new Volume(yieldAmt, unit);
                yield.ConvertUnit();
            }

            /// <summary>
            /// Creates a new ingredient object
            /// </summary>
            /// <param name="n">Name</param>
            /// <param name="yieldAmt">Recipe Yield</param>
            /// <param name="unit">Unit of Yield</param>
            public Recipe(string n, decimal yieldAmt, Weight.Unit unit)
            {
                name = n;
                yield = new Weight(yieldAmt, unit);
            }

            public Recipe(Recipe r)
            {
                name = r.name;
                yield = r.yield;
                ingredients = r.ingredients;
            }

            public string name { get; set; }
            public IMeasure yield;
            public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();
        }

    MenuItems:

        public abstract class MenuItem : IScalable
        {
            public static string title = null;
            public string name { get; set; }
            public decimal maxPortionSize { get; set; }
            public decimal minPortionSize { get; set; }
            public Dictionary<IPortionable, IMeasure> ingredients = new Dictionary<IPortionable, IMeasure>();
        }

    and Meal:

        public class Meal
        {
            public Meal(int guests)
            {
                guestCount = guests;
            }

            public int guestCount { get; private set; }

            //TODO: Make a new MainCourse class that holds pasta and Entree
            public Dictionary<string, int> counts = new Dictionary<string, int>(){
                {MainCourse.title, 0},
                {Side.title, 0},
                {Appetizer.title, 0}
            };

            public List<MenuItem> items = new List<MenuItem>();
        }

    The database just stores and links each of these basic names and amounts together using IDs (RecipeID, ProductID and MenuItemID).

    Read the article

  • T-SQL (SCD) Slowly Changing Dimension Type 2 using a merge statement

    - by AtulThakor
    Working on a stored procedure recently which loads records into a data warehouse, I found that the existing record was being expired with an update statement followed by an insert to add the new active record. Playing around with the merge statement, you can actually expire the current record and insert a new record within one clean statement. This is how it works: we write a normal merge statement that inserts a record when there is no match; when we do match the record, we update the existing record by expiring and deactivating it. At the end of the merge statement we use the output clause to return the staging values from the update, wrap the whole merge statement within an insert statement, and add new active rows for the records which we expired. I've added the full script at the bottom so you can paste it and play around.

        INSERT INTO ExampleFactUpdate
            (PolicyID,
            Status)
        SELECT -- these columns are returned from the output clause
            PolicyID,
            Status
        FROM
        (
            -- merge statement on the unique id, in this case PolicyID
            MERGE dbo.ExampleFactUpdate dp
            USING dbo.ExampleStag s
            ON dp.PolicyID = s.PolicyID
            WHEN NOT MATCHED THEN -- when we can't match the record we insert a new record, and this is all that happens
                INSERT (PolicyID, Status)
                VALUES (s.PolicyID, s.Status)
            WHEN MATCHED -- if it already exists
                AND ExpiryDate IS NULL -- and the expiry date is null
            THEN
                UPDATE
                SET
                    dp.ExpiryDate = getdate(), -- we set the expiry on the existing record
                    dp.Active = 0 -- and deactivate the existing record
            OUTPUT $Action MergeAction, s.PolicyID, s.Status -- the output clause returns a merge action, which can
        ) MergeOutput -- be insert/update/delete; in our example, where a record has been updated (or expired, in our case)
        WHERE -- we filter using a where clause
            MergeAction = 'Update'; -- here

    Complete source for the example:

        if OBJECT_ID('ExampleFactUpdate') > 0
            drop table ExampleFactUpdate
        go

        Create Table ExampleFactUpdate(
            ID int identity(1,1),
            PolicyID varchar(100),
            Status varchar(100),
            EffectiveDate datetime default getdate(),
            ExpiryDate datetime,
            Active bit default 1
        )

        insert into ExampleFactUpdate(
            PolicyID,
            Status)
        select
            1,
            'Live'

        /* Create staging table */
        if OBJECT_ID('ExampleStag') > 0
            drop table ExampleStag
        go

        Create Table ExampleStag(
            PolicyID varchar(100),
            Status varchar(100))

        -- add some data
        insert into ExampleStag(
            PolicyID,
            Status)
        select
            1,
            'Lapsed'
        union all
        select
            2,
            'Quote'

        select *
        from ExampleFactUpdate

        select *
        from ExampleStag

        INSERT INTO ExampleFactUpdate
            (PolicyID,
            Status)
        SELECT
            PolicyID,
            Status
        FROM
        (
            MERGE dbo.ExampleFactUpdate dp
            USING dbo.ExampleStag s
            ON dp.PolicyID = s.PolicyID
            WHEN NOT MATCHED THEN
                INSERT (PolicyID, Status)
                VALUES (s.PolicyID, s.Status)
            WHEN MATCHED
                AND ExpiryDate IS NULL
            THEN
                UPDATE
                SET
                    dp.ExpiryDate = getdate(),
                    dp.Active = 0
            OUTPUT $Action MergeAction, s.PolicyID, s.Status
        ) MergeOutput
        WHERE
            MergeAction = 'Update';

        select *
        from ExampleFactUpdate

    Read the article

  • Passing data between the VirtualBox Host and the Guest

    - by Fat Bloke
    Here's a good question: "How can you figure out the VM name from within the VM itself?" While this data is not automatically available, the general purpose, and very powerful, VirtualBox "GuestProperty" APIs can be used from the host and the guest to pass arbitrary data, in key/value pair format, into and out of the guest. Note that this does require that the VirtualBox Guest Additions have been installed in the guest. To play with this, try using the "VBoxManage" command line on your VirtualBox host machine, and "VBoxControl" in the guest.

    Host syntax:

        VBoxManage guestproperty get <vmname>|<uuid> <property> [--verbose]
        VBoxManage guestproperty set <vmname>|<uuid> <property> [<value> [--flags <flags>]]
        VBoxManage guestproperty enumerate <vmname>|<uuid> [--patterns <patterns>]
        VBoxManage guestproperty wait <vmname>|<uuid> <patterns> [--timeout <msec>] [--fail-on-timeout]

    Guest syntax:

        VBoxControl.exe guestproperty get <property> [-verbose]
        VBoxControl.exe guestproperty set <property> [<value> [-flags <flags>]]
        VBoxControl.exe guestproperty enumerate [-patterns <patterns>]
        VBoxControl.exe guestproperty wait <patterns> [-timestamp <last timestamp>] [-timeout <timeout in ms>]

    So to solve our problem above, we set the VM name in the host system on an arbitrary key like this:

        $ VBoxManage guestproperty set "Windows 7 (x64)" /MyData/VMname "Windows 7 (x64)"

    And within the guest we can use:

        C:\Program Files\Oracle\VirtualBox Guest Additions>VBoxControl.exe guestproperty get /MyData/VMname
        Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 4.1.14
        (C) 2008-2012 Oracle Corporation
        All rights reserved.
        Value: Windows 7 (x64)

    The GuestProperty API is pretty powerful, so for the interested, get more info in the User Manual. - FB
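    Going the other way, the same calls let a guest signal the host: the guest sets a key, and the host blocks on it with the wait verb shown in the syntax above. A small sketch (the key name /MyData/Status is made up for illustration). In the guest, when some task finishes:

        VBoxControl.exe guestproperty set /MyData/Status "backup-done"

    On the host, wait up to 60 seconds for any key under /MyData/ to change:

        $ VBoxManage guestproperty wait "Windows 7 (x64)" /MyData/* --timeout 60000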

    Read the article

  • How to save data from model without any association in cakephp [on hold]

    - by Abhishek
    I have a base model in which I use a dataManipulation method for updates in my code. I want to save data to the Receipt and ReceiptLine models, and also to OpeningBankStatement. However, I have created associations only for Receipt and ReceiptLine, not for OpeningBankStatement. So I want to save data to the OpeningBankStatement model without any association. My demo data looks like this:

    Array
    (
        [Receipt] => Array
            (
                [ID] => 566
                [ObjectType] => 84
                [TXNName] => bbnm
                [TXNDate] => 03-06-2014
                [BranchID] => 1
                [Narration1] => 267
                [Narration] => Cheque Received
                [ExecutiveID] => 805
                [AccountType] => 104
                [Account] => 68
                [ReferenceNo] =>
                [TXNCurrencyID] => 3
                [ExchangeRate] => 1.00000
                [ManualAdiustment] => 0
                [RevisionNumber] => 1
                [CompanyID] => 1
                [Status] => 633
            )
        [ReceiptLine] => Array
            (
                [0] => Array
                    (
                        [TXNID] => 566
                        [LineNo] => 0
                        [LineType_072] => 429
                        [BranchID] => 1
                        [AccountID] => 68
                        [ContactID] =>
                        [Amount] => 0
                        [CancelAmount] => 0
                        [OpenAmount] => 0
                        [Narration] => Cheque Received
                        [CreatedBy] => 229
                        [ModifiedBy] => 229
                        [CreatedDate] => 2014-06-03 00:00:00
                        [ModifiedDate] => 2014-06-03 00:00:00
                        [Status] => 1
                        [RevisionNumber] => 1
                        [RowState] =>
                        [tmpInstrumentDate] =>
                    )
                [1] => Array
                    (
                        [LineNo] => 0
                        [RowState] => 436
                        [TXNID] => 0
                        [BranchID] => 1
                        [ContactID] =>
                        [AccountID] => 68
                        [Narration] => Cheque Received
                        [Amount] => 0
                        [RevisionNumber] => 1
                        [LineType_072] => 460
                        [CancelAmount] => 0
                        [OpenAmount] => 0
                        [Status] => 1
                    )
            )
        [OpeningBankStatement] => Array
            (
                [ObjectType] => 131
                [TXNSeries] => 1
                [TXNNo] => 12345
                [TXNName] => bbnm
                [TXNDate] => 03-06-2014
                [CompanyID] => 1
                [AccountID] => 68
                [ExecutiveID] => 805
                [Narration] => Cheque Received
                [ReferenceNo] =>
                [ParentObjectType] => 84
                [ParentTXNID] => 1
                [CancelledBy] => 1
                [CancelledDate] => 2014-02-02
                [CancellationRemarks] => hfg
                [Status] => 1
                [RevisionNumber] => 1
            )
    )

    Can this be solved by any dynamic model association or callback method? Please suggest a solution.
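    One common pattern, as a sketch (this assumes CakePHP 2.x; saveAll handles the associated Receipt/ReceiptLine pair, while the association-less model is loaded on demand via ClassRegistry and saved separately):

        <?php
        // Save the associated part of the request data as usual.
        $this->Receipt->saveAll($data);

        // Load the unassociated model on demand and save its slice directly.
        $opening = ClassRegistry::init('OpeningBankStatement');
        $opening->create();
        if (!$opening->save($data['OpeningBankStatement'])) {
            // handle validation errors for OpeningBankStatement here
        }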

    Read the article

  • System Rescue CD on multiboot usb not working

    - by darkfeline
    I have a multiboot USB with System Rescue CD and GRUB2 on it. When I try to boot it, it tries to find systemrescuecd/sysrcd.dat, attempts to mount /dev/sr0 and all the partitions on /dev/sda, then declares that it cannot find systemrescuecd/sysrcd.dat on those devices and dumps me into a primitive shell. The relevant entries in grub.cfg:

        menuentry "SystemRescueCd 32bit" {
            linux /systemrescuecd/isolinux/rescuecd rootfs=/systemrescuecd subdir=systemrescuecd dostartx setkmap=us
            initrd /systemrescuecd/isolinux/initram.igz
        }

        menuentry "SystemRescueCd 64bit" {
            linux /systemrescuecd/isolinux/rescue64 rootfs=/systemrescuecd subdir=systemrescuecd dostartx setkmap=us
            initrd /systemrescuecd/isolinux/initram.igz
        }

    I think the problem is that System Rescue CD cannot see /dev/sdb, which is my USB, but I don't know where to begin to fix it. If it helps, I set up my USB with a utility called MultiSystem, which is like MultiISO for Linux.

    Read the article

  • Most basic, low power home surveillance system

    - by cbp
    I am thinking of setting up a simple but effective surveillance system for my house that is:
    - Very low powered (preferably no PCs left running out of stand-by mode)
    - Cheap

    When motion (or sound) is detected, I would like it to:
    - Send an email/phone alert to me
    - Record and upload video to the web (in case they steal the camera)

    So I imagine a system where I leave a netbook PC in stand-by mode and have it woken up by a motion detector. This initiates software that sends alerts and periodically uploads recorded video to the web. The software part is easy for me, but I'm not really a gadget man, so I'd like some advice on using a motion sensor of some sort to wake up the PC. Does anyone have some good advice? I know there are a couple of questions dealing with this topic already (see http://superuser.com/questions/3054/looking-for-a-moderately-priced-home-surveillance-setup and http://superuser.com/questions/2929/can-you-suggest-a-great-home-security-setup-anti-burglars-e-t-c); I am seeking more specific information with this question.

    Read the article

  • System backup with Norton Ghost

    - by Mehper C. Palavuzlar
    I will upgrade my system from Windows Vista Home Premium (x64) to Windows 7 (x64). Before starting the upgrade process, I want to back up my current system with Norton Ghost. I have never used it before, so I need assistance. At the moment, Vista uses 139 GB of space, and I have a 1 TB external HD connected via USB. If you can give me step-by-step instructions on how to back up, and how to restore if the upgrade somehow fails, I'll appreciate it. Thanks.

    Read the article

  • Windows wait command till a system time

    - by user53864
    I recently made a batch script for backups. Somewhere in the middle of the script I have to wait until a certain time is reached before resuming with the next line of the script. I've scheduled the script at 4:00 PM, and after the wait command the next part should start at exactly 5:30 PM. I thought of using the SLEEP command, but it's not certain that the commands before the wait will finish at a fixed time (due to inconsistent file sizes). It is certain, though, that they will be done by 5:00 or 5:10, and the script should then execute a wait command that waits for a certain system clock time. Is there a command that waits or sleeps until the specified time is reached on the system clock, and resumes thereafter? Has anybody come across this situation, and how was it resolved?
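    In the absence of a built-in wait-until command, one workaround is a small polling loop. A minimal sketch (this assumes %TIME% renders as zero-padded 24-hour HH:MM:SS, which holds for afternoon times on a default locale, and uses ping as the delay since older Windows versions lack a native sleep/timeout command):

        @echo off
        rem ... earlier backup commands run here ...

        :waitloop
        rem Compare the first five characters of %TIME% (HH:MM) against the target.
        if "%TIME:~0,5%" GEQ "17:30" goto resume
        rem ping against loopback serves as a roughly 30-second sleep
        ping -n 31 127.0.0.1 >nul
        goto waitloop

        :resume
        rem ... the 5:30 PM part of the backup continues here ...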

    Read the article

  • Testdisk won’t list files for an ext4 partition inside a LVM inside a LUKS partition

    - by user1598585
    I have accidentally deleted a file that I want to recover. The partition is an ext4 partition inside an LVM volume that is encrypted with dm-crypt/LUKS. The encrypted LUKS partition is /dev/sda2, which contains a physical volume with a single volume group, mapped to /dev/mapper/system. The logical volume holding the ext4 partition is mapped to /dev/mapper/system-home. Running # testdisk /dev/mapper/system-home recognizes it as an ext4 partition, but tells me the partition seems damaged when I try to list the files. If I run # testdisk /dev/mapper/system, it detects all the partitions, but the same thing happens when I try to list their files. Am I doing something wrong, or is this a known bug? I have searched but haven't found any clue.
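    For reference, a sketch of how the stack described above gets assembled before running any recovery tool (device and mapper names are the ones from the question; this assumes the container is not already open):

        # Unlock the LUKS container on the raw partition
        cryptsetup luksOpen /dev/sda2 system

        # Activate the volume group found inside it
        vgchange -ay

        # Point the recovery tool at the decrypted logical volume,
        # never at /dev/sda2 itself (it only holds ciphertext)
        testdisk /dev/mapper/system-home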

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What's your favorite ticketing system? I was wondering if somebody could recommend a very user-friendly, simple, general-purpose ticketing/tech-support system. I need something web based, preferably open source/free software, implemented in PHP, Ruby, Ruby on Rails, or Java (as the back end) with MySQL or PostgreSQL as the database engine. I need something that is not development or project management oriented like Eventum or similar (random example); something that lets a user open a tech support request and follow it until it is solved or dropped. I need it to be open source so that I can modify or extend it if the need arises. I tried a number of such systems and found that osTicket and eTicket are close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article

  • System randomly Crashing

    - by Hailwood
    Hi guys, the computer is running Windows XP. It seems to be crashing randomly, with no pattern in its timing or in the activity that causes it. What could be causing this, and how do I diagnose it? The event log shows:

        Event Type: Error
        Event Source: System Error
        Event Category: (102)
        Event ID: 1003
        Date: 22/02/2011
        Time: 7:10:05 p.m.
        User: N/A
        Computer: YOUR-8ABC512DA0
        Description: Error code 1000008e, parameter1 c0000005, parameter2 bf801b50, parameter3 ee118c44, parameter4 00000000.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 53 79 73 74 65 6d 20 45   System E
        0008: 72 72 6f 72 20 20 45 72   rror  Er
        0010: 72 6f 72 20 63 6f 64 65   ror code
        0018: 20 31 30 30 30 30 30 38    1000008
        0020: 65 20 20 50 61 72 61 6d   e  Param
        0028: 65 74 65 72 73 20 63 30   eters c0
        0030: 30 30 30 30 30 35 2c 20   000005,
        0038: 62 66 38 30 31 62 35 30   bf801b50
        0040: 2c 20 65 65 31 31 38 63   , ee118c
        0048: 34 34 2c 20 30 30 30 30   44, 0000
        0050: 30 30 30 30               0000

    Read the article

  • Recovering with DDRescue Cannot Complete (write error: Read-only file system)

    - by c00lryguy
    I'm trying to recover a corrupt VDI by using vdfuse to mount the VDI and dd_rescue to rescue the borked partition. dd_rescue seemed to be working fine, but once it reached about half of the partition it just stopped and gave the following error:

        ddrescue: write error: Read-only file system

    Wait... what? It suddenly turned the filesystem it was writing the recovered partition to into a read-only file system. Well... why? Will I never be able to finish this? What's going on?

    Read the article

  • I started getting a weird message "Encrypting file system - Back up your file encryption key"

    - by user22559
    Hello. I started getting a strange message when I start my computer: an icon appears in the system tray, and a popup tells me "Encrypting File System - Back up your file encryption key". I know what EFS is, but I don't use it. To my knowledge, I don't have any encrypted files on my partition. I have searched with Total Commander on all the partitions for files that have the "encrypted" attribute, but I found nothing. So I don't have any encrypted files. Does anyone know what I did to get this message?

    Read the article

  • CPU's on Hyper-V host system is just idling, even though VM's are at full throttle

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in the performance monitor on the host system. I have set up three virtual machines, all running Windows 2008 32-bit:
    - Build server, running TeamCity
    - Staging server
    - SQL Server, running SQL Server 2005

    These three machines run very sluggishly; they are at 100% CPU even though the host system is barely using any CPU at all, typically below 10% total. Could anyone please give some tips on the best setup for CPU allocation? Should I have set each server to have two cores, or should I increase this number above the total number of cores on the host? What are good numbers to set for the Virtual Machine Reserve and Virtual Machine Limit? Is 8 GB of physical RAM insufficient for 3 VMs? Thanks for reading. :)

    Read the article

  • Install system-wide PEAR on Debian Lenny

    - by artvolk
    Good day! I've installed PEAR on Debian Lenny using apt-get install php-pear; it was installed in /usr/share/php. When I try to install anything using pear install <package>, a PEAR folder is created under the current user's home directory and a separate copy of PEAR is installed there. I ended up installing a local copy of PEAR for one of the users like this: http://kuziel.info/log/archives/2006/04/01/Installation-of-local-PEAR-repository. Is there a way to tell pear to install packages into the system-wide repository in /usr/share/php? What is the recommended way of using the system-wide PEAR copy? Thanks in advance!
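    For what it's worth, a sketch of what to check (standard pear commands; the usual culprit is a per-user ~/.pearrc overriding the system-wide configuration, which on Debian typically lives under /etc/pear):

        # See which config layer wins and where packages would be installed
        pear config-show | grep php_dir

        # If a user-level override exists, drop it...
        rm ~/.pearrc

        # ...or point the user layer back at the system location
        pear config-set php_dir /usr/share/php user

        # Writing to /usr/share/php generally requires root
        sudo pear install <package>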

    Read the article

  • Disabling partition just for one OS on multi-boot system

    - by Emiswelt
    Hi. Regarding the solution described here: http://serverfault.com/questions/36385/how-can-i-mount-a-hard-drive-as-read-only-on-windows-xp. I have a system with three partitions. One runs Windows 7; one runs Windows XP and is for some experimental programming and testing. I don't want to mess anything up, so I am going to disable the Windows 7 partition from Windows XP, as described on the linked page above, to protect that operating system. When I do this, is the Windows 7 partition only disabled for the running XP OS, or is the Windows 7 partition rendered unbootable? With best regards

    Read the article

  • linux ssh -X graphical applications will not start when system load is high

    - by Chrisv
    So I am using ssh -X to access a server: I am at a Xubuntu desktop accessing an Ubuntu server that is in the next room. Usually everything works fine, but when the system load gets high, any graphical applications I have freeze, and they fail to restart. This happens even if the process causing the high load has been niced to a low priority with "nice -n 19". And even though the system load is high, the command line works fine with no delay, and other applications I have running on the server (e.g. virtual machines) run fine. But any graphical application running through X dies. When the graphical applications fail, they usually give an error message that suggests a time-out. It seems that something connected to X has a low priority and times out. But what is it, and how does one fix it?

    Read the article

  • 64bit or 32bit Linux system?

    - by Milan Babuškov
    I have a server that has 4 GB of RAM. On it, I have an installation of 32-bit Slackware Linux 12.1, which of course is not using all of the 4 GB. I'd like to increase the RAM to 8 GB soon, and am looking for a way for the system to use it. The system is used as a database server and is under high load during the day. AFAICT, I have two options: stay with 32-bit, rebuild the kernel (enabling PAE), and lose some performance; or go with 64-bit and reinstall everything. Looking at 64-bit versions of Slackware, I could run -current or Slamd64. Now, on to the questions: Should I stay with 32-bit or go with 64-bit? If I go 64-bit, should I use -current or Slamd64? P.S. I hope to get answers from someone actually using any of these configurations in production, not just copy/paste something I could find myself via Google.

    Read the article
