Search Results

Search found 314 results on 13 pages for 'dice'.

Page 10/13 | < Previous Page | 6 7 8 9 10 11 12 13  | Next Page >

  • How can I duplicate HBCD's XP boot loader with my MBR?

    - by Warpstone
    I'm stumped. I'm migrating a Win XP Lenovo T500 to an SSD: I copied the XP partition to the SSD using EaseUS, aligned the boot sector using GParted, and now the MBR needs to be rebuilt (fair enough). However, all attempts to use the Windows Recovery Console hang (both via a boot CD and even when the console was installed as a boot option). I've tried using a bunch of tools to rebuild/replace the MBR, but no dice. They all say the MBR has been fixed, but I cannot load Windows from the SSD. HBCD's "boot from Windows" option works just fine, however. I'm confused as to what HBCD can do that my drive can't. How can I get that functionality on my SSD? Is it an MBR fix I can mirror? The SSD is extremely fast when I do use HBCD to boot up... but it would be nice to not need token-based access to the machine! :) Note: I know, Windows 7 may be worth a fresh install, but I'm trying to avoid the cost and hassle if possible.
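
    For anyone in a similar spot, a sketch of the usual way to rewrite XP-compatible boot code from a WinPE-style environment (the kind HBCD boots); the drive letter is an assumption, and none of this is confirmed against this exact setup:

        rem From a Vista/7/WinPE command prompt: write NTLDR-style boot code and a standard MBR to C:
        bootsect /nt52 C: /mbr

        rem The older XP Recovery Console equivalents (the ones that hang for the asker):
        fixmbr
        fixboot C:
        bootcfg /rebuild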

    Read the article

  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement some data logging to CSV files using a Data Collector Set in PerfMon (on a Windows Server 2008 R2 system). The issue we are running into is that we (seemingly) can't control the permissions being set on the log files created by PerfMon. What we want is for the log files created by PerfMon to have Everyone:F permissions (Full Control for Everyone). So, we have a directory structure set up where all logs go into a folder: c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2) In the above example, c:\vms\PerfMonLogs\EvaluationG2 has permissions Everyone:F (below is the icacls for this directory) EVALUATIONG2/ Everyone:(OI)(CI)(F) NT AUTHORITY\SYSTEM:(OI)(CI)(F) BUILTIN\Administrators:(OI)(CI)(F) BUILTIN\Performance Log Users:(OI)(R) When the data collector set runs, it creates new sub folders and files within c:\vms\PerfMonLogs\EvaluationG2, e.g. (C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3) Each of these directories and files has the following permissions: M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F) BUILTIN\Administrators:(OI)(CI)(F) BUILTIN\Performance Log Users:(OI)(R) So these new folders are not simply inheriting permissions from the parent folder (don't know why). Now, we tried adding Everyone:F using the security tab on the collector set (no dice). Any ideas? How do we control the permissions on the log files generated by the PerfMon data collector set?
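
    A workaround sketch, assuming the goal is simply to stamp Everyone:F onto whatever PerfMon creates after each run (the path is the one from the question; using the collector set's stop task for this is an assumption about acceptable design):

        icacls "C:\vms\PerfMonLogs\EvaluationG2" /grant "Everyone:(OI)(CI)F" /T

    A scheduled task running that command can be pointed at from the data collector set's Task tab ("Run this task when the data collector set stops"), so the grant is re-applied to each newly created subfolder.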

    Read the article

  • I (stupidly) converted a TrueCrypt encrypted disk to GPT in Disk Management: now TrueCrypt won't mount it

    - by asilentfire
    Backstory: After moving a Macrium Reflect disk image from my TrueCrypt external drive (with whole disk encryption) onto an unencrypted drive and using Windows PE with Macrium Reflect to restore my internal disk to the recovery image on the external unencrypted drive, my Windows 8 failed to boot. I then went back and also recovered the System Partition (looking now, it is currently EFI), but I still couldn't boot into my backup. I was in a hurry to get online for something, so I just did a clean install of Windows 8, without the backup. After I installed Windows 8, I went into Disk Management out of curiosity to see if there were other partitions with Windows 8 that Macrium might have missed, and there is (by default) a Recovery Partition of 100MB. My memory of this is hazy, as I was trying to get up and running for an exam at 4 AM: something in Disk Management prompted me to convert my encrypted external drive to GPT. I have no idea why I did this, but I went ahead and allowed it to convert my TrueCrypt drive to GPT. Now, I can't mount the drive in TrueCrypt. Disk Management sees it as Disk 1, Basic, and Unallocated. I tried converting it back to MBR with Disk Management, but no dice with TrueCrypt :( If I try to mount the disk in TrueCrypt I get the message "Incorrect password or not a TrueCrypt volume". I should never have messed with a TrueCrypt drive in Disk Management, but I did. I have important college work on that drive, and fear I have lost it forever. PLEASE HELP

    Read the article

  • Is it possible to use ffmpeg to trim off X seconds from the beginning of a video with an unspecified length?

    - by marcelebrate
    I need to trim just the first 1 or 2 seconds off of a series of FLV recordings of varying, unspecified lengths. I've found plenty of resources for extracting a specified duration from a video (e.g. 30 second clips), but none for continuing to the end of a video. Both of these attempts just yield a copied version of the video, sans desired trimming: ffmpeg -ss 2 -vcodec copy -acodec copy -i input.flv output.flv ffmpeg -ss 2 -t 120 -vcodec copy -acodec copy -i input.flv output.flv The thought on the second one was: perhaps if I specified a length beyond what was possible, it'd just go to the end. No dice. I know it's not an issue with codecs or using seconds instead of timecode, since the following worked a charm: ffmpeg -ss 2 -t 5 -vcodec copy -acodec copy -i input.flv output.flv Any other ideas? I'm open to using other (Windows-based) command line tools, but am strongly favoring ffmpeg since I'm already using it for thumbnail creation and am familiar with it. If it helps, my videos will all be under 2 minutes.
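
    A sketch of the usual shape, assuming a reasonably current ffmpeg build: leave out -t entirely and ffmpeg copies from the seek point through to the end of the input. Note that options placed before -i apply to the input (fast seek) while options after it apply to the output, and with stream copy the cut lands on the nearest keyframe rather than exactly at 2 seconds:

        ffmpeg -ss 2 -i input.flv -vcodec copy -acodec copy output.flv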

    Read the article

  • Proactive Support Sessions at OUG London and OUG Ireland

    - by THE
    Oracle Proactive Support Technology is proud to announce that two members of its team will be speaking at the UK and Ireland User Group Conferences this year. Maurice and Greg plan to run the following sessions (may be subject to change): Maurice Bauhahn: OUG Ireland BI & EPM and Technology Joint SIG Meeting, 20 November 2012 (BI&EPM SIG event in Ireland, 09:00-17:00), and OUG London EPM & Hyperion Conference 2012, Tuesday 23rd to Wednesday 24th Oct 2012: "Profit from Oracle Diagnostic Tools Embedded in EPM". Oracle bundles valuable toolsets in many of its software suites to collect logs and settings, slice/dice error messages, track performance, and trace activities across services. Become familiar with several enterprise-level diagnostic tools embedded in Enterprise Performance Management (Enterprise Manager Fusion Middleware Control, Remote Diagnostic Agent, Dynamic Monitoring Service, and Oracle Diagnostic Framework). Expedite resolution of Service Requests as you learn to upload output from these tools to My Oracle Support. Who will benefit from attending the session? Geeks will find this most beneficial, but anyone who raises Oracle technical service requests will learn valuable pointers that may speed resolution. The focus is on the EPM stack, but this session will benefit almost everyone who needs to drill deeper into Oracle software environments. What will delegates learn from the session? Delegates who participate in this session will learn: how to access and run Remote Diagnostic Agent, Enterprise Manager Fusion Middleware Control, Dynamic Monitoring Service, and Oracle Diagnostic Framework; how to exploit the strengths of each tool; how to pass the outputs to My Oracle Support; and how to restrict exposure of sensitive information. OUG Ireland BI & EPM and Technology Joint SIG Meeting, 20 November 2012 (BI&EPM SIG event in Ireland, 09:00-17:00), and OUG London EPM & Hyperion Conference 2012, Tuesday 23rd to Wednesday 24th Oct 2012: "Using EPM-Specific Troubleshooting Tools". EPM developers have created a number of EPM-specific tools to collect logs and configuration files, centralize configuration information, and validate a configured installation (Ziplogs, EPM Registry Editor, [Deployment Report, Registry Cleanup Utility, Reset Configuration Tool, EPMSYS Hostname Check] and Validate [EPM System Diagnostic]). Learn how to use these tools on your own or to expedite Service Request resolution. Who will benefit from attending the session? Anyone who monitors Oracle EPM environments or raises service requests will learn valuable lessons that could speed resolution of those requests. Anyone from novices to experts will benefit from this review of custom EPM troubleshooting tools. What will delegates learn from the session? Learn where to locate and start the EPM troubleshooting tools created by EPM developers; learn how to collect and upload the outputs of those tools; adapt to the history of changes in these tools across time and versions; and learn how to make critical changes in configurations. Grzegorz Reizer: OUG London EPM & Hyperion Conference 2012, Tuesday 23rd to Wednesday 24th Oct 2012: "EPM 11.1.2.2: Detailed overview of new features and improvements in Financial Management products".
This presentation is a detailed overview of new features and improvements introduced in Enterprise Performance Management 11.1.2.2 for Financial Management products (Hyperion Financial Management, Hyperion Planning, Financial Close Management). The presentation will cover a number of new product features, from recently introduced configurable dimensionality in HFM to new functionality enhancements in Planning. We'll close the session with an overview of upgrade options from earlier product releases.

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the various components in the Big Data story. In this article we will understand the various analytics tasks we try to achieve with Big Data and list the important tools in the Big Data story. When you have plenty of data around you, what is the first thing that comes to your mind? “What does all this data mean?” Exactly – the same thought comes to my mind as well. I always wanted to know what all the data means and what meaningful information I can get out of it. Most Big Data projects are built to retrieve the various intelligence all this data contains within it. Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions such as: On which date do the most of my friends have a birthday? What is the favorite film of most of my friends, so I can talk about it and engage them? What is my friends' most liked place to travel? Which is the most disliked cousin of my friends in India and the USA, so that when they travel, I do not take them there? There are many more questions I can think of. This illustrates how important it is to analyze Big Data. Here are a few of the kinds of analysis you can use with Big Data. Slicing and Dicing: This means breaking your data down into smaller sets and understanding them one set at a time. It also helps to present information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data along dimensions like actor, movie length, etc. Real Time Monitoring: This is crucial in social media when events are happening and you want to measure the impact while the event is in progress. For example, if you are using Twitter when there is a football match, you can watch what fans are saying about the match on Twitter as it happens. Anomaly Prediction and Modeling: If the business is running normally, all is well, but if there are signs of trouble, everyone wants to know about them early. Big Data analysis of various patterns can be very helpful for predicting the future. It may not always be accurate, but certain hints and signals can be very helpful. For example, lots of data can help conclude that lots of rain will increase the sale of umbrellas. Text and Unstructured Data Analysis: Unstructured data is now the norm in the new world and is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, by analyzing lots of images, one can see that people like to wear certain colors in certain months. Big Data Analytics Solutions There are many different Big Data analytics solutions out in the market. It is impossible to list all of them, so I will list a few of them over here. Tableau – This has to be one of the most popular visualization tools out in the big data market. SAS – A high performance analytics and infrastructure company. IBM and Oracle – They have a range of tools for Big Data analysis. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – the Data Scientist. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
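
    To make the slice-and-dice idea concrete, here is a small sketch in SQL against a hypothetical movie-viewing table (the table and column names are invented purely for illustration):

        -- total viewings sliced by genre, then diced further by release year
        SELECT genre, release_year, COUNT(*) AS viewings
        FROM movie_viewings
        GROUP BY genre, release_year
        ORDER BY genre, release_year;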

    Read the article

  • EPM troubleshooting Utilities

    - by THE
    (in via Maurice) "Are you keeping up-to-date with the latest troubleshooting utilities introduced from EPM 11.1.2.2? These are typically not described in product documentation, so you might miss references to them. The following five utilities may be run from the command line. (1) Deployment Report was introduced with EPM 11.1.2.2 (11 April 2012). It details logical web addresses, web servers, application ports, database connections, user directories, database repositories configured for the EPM system, data directories used by EPM system products, instance directories, FMW homes, deployment history, et cetera. It also helps to keep you honest about whether you made changes to the system and at what times! Download Shared Services patch 13530721 to get the backported functionality in EPM 11.1.2.1. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat report deployment (on Microsoft Windows). The output is saved under \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html. (2) Log Analysis has received more "press". It was released with EPM 11.1.2.3 and helps the user to slice and dice EPM logs. It has many parameters which are documented when run without parameters, when run with the -h parameter, or in the 'Readme' file. It has also been released as a standalone utility for EPM 11.1.2.3 and earlier versions. (Sign in to My Oracle Support, click the 'Patches & Updates' tab, enter the patch number 17425397, and click the Search button. Download the appropriate platform-specific zip file, unzip, and read the 'Readme' file. Note that you must provide a proper value to a JAVA_HOME environment variable [pointer to the mother directory of the Java /bin subdirectory] in the loganalysis.bat | .sh file and use the -d parameter when running standalone.) Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\loganalysis.bat -h (on Microsoft Windows). The output is saved under the \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\ subdirectory. (3) The Registry Cleanup command may be used (without fear!) to clean up various corruptions which can affect the Hyperion (database-based) Repository. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/registry-cleanup.sh (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\registry-cleanup.bat (on Microsoft Windows). The actions are described on the command line. (4) The Remove Instance Command is only used if there are two or more instances configured on one computer and one of those should be deleted. Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/remove-instance.sh (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\remove-instance.bat (on Microsoft Windows). (5) The Reset Configuration Tool was introduced with EPM 11.1.2.2. It nullifies Shared Services Hyperion Registry settings so that a service may be reconfigured. You may locate the values to substitute for <product> or <task> by scanning registry.html (generated by running epmsys_registry.bat | .sh). Find productNAME in INSTANCE_TASKS_CONFIGURATION and SYSTEM_TASKS_CONFIGURATION nodes and identify tasks by property pairs that have values of 'Configurated' or 'Pending'.
Run it from /Oracle/Middleware/user_projects/epmsystem1/bin/resetConfigTask.sh -product <product> -task <task> (on Unix/Linux) or \Oracle\Middleware\user_projects\epmsystem1\bin\resetConfigTask.bat -product <product> -task <task> (on Microsoft Windows)." Thanks to Maurice for this collection of utilities.

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings make seemingly instantaneous choices every day --in the blink of an eye--and how we “think without thinking.” These situations actually aren’t as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as: 1. What makes some people good at thinking on their feet and making quick spontaneous decisions? 2. Why do some people follow their instincts and win, while others consistently seem to stumble into error? 3. Why are some of the best decisions often those that are difficult to explain to others? In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last, based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables, i.e., those who have perfected the art of "thin-slicing.” In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies. If you are a sales executive, ask yourself this: “Do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?” Three key findings in the report are: • Six out of ten executives say they rely more on data than instinct to drive decisions. • Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results. • Senior sales executives rely more on current and historical data than on forecast data. Strikingly similar to what Gladwell concludes in Blink, the report’s authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, “territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report’s authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans." To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., collection of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension to slice and dice sales results, using sales analytics and alerts to help you identify where problems are occurring.
Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • iPhone Inspiration Stories...

    - by Luis Tovar
    So we have all heard of these overnight millionaires in the App Store. We know it would be great, but somewhat unlikely for us to do ourselves. What I do know is that learning and developing for the iPhone can sometimes be overwhelming. I continue to push on and learn more and more; even when my brain just can't take any more, I turn my Mac on and read up on how to do new and different things I haven't done in my app yet. I would like to see a few posts of an everyday developer's story and how their life has improved now that they are sitting up on the golden pedestal of an iPhone developer. I look at these Dice.com jobs and think how nice it would be to be making the kind of money offered to an experienced iPhone developer. I'm on my way... and while I'm on that road I'd like to look over and see your story... Please include how you got into iPhone development, any apps you have out there, and your success with them. Care to share?

    Read the article

  • VS 2010 VSTO Add in for EXCEL 2007 Won't load

    - by Erick
    Hi everyone, We have an application that is built with Excel as the front end using the Office object model. We were using a C++ shim to load it as a COM add-in for Excel 2003, but I've updated it to use the latest VSTO for Excel 2007. I've also been using VS 2010 for the latest version. The problem is that everything works great on my dev machine in debugger mode as well as just launching Excel 2007, but I cannot get it to run on any other machine (my current target machine is Win7, development is XP). I've created a ClickOnce deployment of the add-in, and I can see it in the list of COM add-ins, but when I check it to load it, nothing happens. I re-open the add-ins manager and it is unchecked. I've also tried setting it in the registry, but as soon as I run it, it sets the registry back to 'do not load'. I've tried everything I can think of and searched all over the web, but no dice. Any help would be appreciated!
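
    One diagnostic sketch that may help (assuming the VSTO runtime is installed on the target machine): VSTO swallows add-in load errors by default, and the LoadBehavior value Excel writes back usually hints at what happened. The Office path and add-in key name below are assumptions, not taken from the question:

        rem Show VSTO load errors instead of letting Excel silently disable the add-in
        set VSTO_SUPPRESSDISPLAYALERTS=0
        "C:\Program Files\Microsoft Office\Office12\EXCEL.EXE"

        rem Check what Excel set LoadBehavior to after the failed load (3 = load at startup)
        reg query "HKCU\Software\Microsoft\Office\Excel\Addins\MyExcelAddIn" /v LoadBehavior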

    Read the article

  • Xdebug: remote debugging won't stop at breakpoints

    - by Bryan M.
    I'm having a problem with Xdebug not stopping at breakpoints when using remote debugging (everything is fine when running scripts via the command line). It will break at the first line of the program, then exit, not catching any breakpoints. It used to work fine, until I switched over to using MacPorts for Apache and PHP. I've tried re-compiling it several times (with several versions), but no dice. I'm using PHP 5.3.1 and Xdebug 2.1.0-beta3. I've also tried at least 3 different debugging programs (MacGDBp, NetBeans and JetBrains Web IDE). My php.ini settings look like: [xdebug] xdebug.remote_enable=1 xdebug.remote_handler=dbgp xdebug.remote_mode=req xdebug.remote_port=9000 xdebug.remote_host=localhost xdebug.idekey=webide And when I log the debugger output, setting a breakpoint looks like this: <- breakpoint_set -i 895 -t line -f file:///Users/WM_imac/Sites/wm/debug_test.php -n 13 -s enabled -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="895" state="enabled" id="890660002"></response> When run, the debugger will get the context of the first line of the application, then send the detach and stop messages. However, this line is output when starting the debugger. <- feature_get -i 885 -n breakpoint_types -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_get" transaction_id="885" feature_name="breakpoint_types" supported="1"><![CDATA[line conditional call return exception]]></response> Does 'line conditional call return exception' mean anything?
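
    A diagnostic sketch, assuming Xdebug 2.x and that the php.ini block above is the one Apache actually loads (worth confirming with phpinfo()): turning on Xdebug's remote log shows whether the breakpoint_set commands ever reach the Apache-side Xdebug and what it replies. The log path is an assumption:

        ; add to the [xdebug] section, then restart Apache and re-run the debug session
        xdebug.remote_log = /tmp/xdebug_remote.log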

    Read the article

  • WPF derived ComboBox SelectedValuePath issue

    - by opedog
    In our application we have a very large set of data that acts as our data dictionary for ComboBox lists, etc. This data is statically cached and keyed off of 2 variables, so I thought it wise to write a control that derived from ComboBox and exposed the 2 keys as DPs. When those 2 keys have proper values I set the ItemsSource of the ComboBox automatically from the data dictionary list it corresponds to. I also automatically set the SelectedValuePath and DisplayMemberPath to Code and Description, respectively. Here's an example of how an item in the ItemsSource from the data dictionary list always looks: public class DataDictionaryItem { public string Code { get; set; } public string Description { get; set; } public string Code3 { get { return this.Code.Substring(0, 3); } } } The value of Code is always 4 bytes long, but sometimes I only need to bind 3 bytes of it. Hence, the Code3 property. Here's how the code looks inside my custom combobox to set the ItemsSource: private static void SetItemsSource(CustomComboBox combo) { if (string.IsNullOrEmpty(combo.Key1) || string.IsNullOrEmpty(combo.Key2)) { combo.ItemsSource = null; return; } List<DataDictionaryItem> list = GetDataDictionaryList(combo.Key1, combo.Key2); combo.ItemsSource = list; } Now, my problem is, when I change the SelectedValuePath in the XAML to Code3, it doesn't work. What I bind to SelectedValue still gets the full 4-character Code from DataDictionaryItem. I even tried rerunning SetItemsSource when the SelectedValuePath was changed, and no dice. Can anyone see what I need to do to get my custom combobox to wake up and use the SelectedValuePath provided if it's overridden in the XAML? Tweaking the value in the property setter in my SelectedValue bound business object is not an option.

    Read the article

  • crystal reports : cannot determine the queries necessary to get data for this report

    - by phill
    I've recently come across the following error in one of my Crystal Reports following an accounting system update. Group #1: ? - A : This group section cannot be printed because its condition field is nonexistent or invalid. Format the section to choose another condition field. I've verified every database field being used to ensure it still exists for consistency and checked the formulas for that section. No dice. So, to hopefully fix the problem, I remove the section using the Section Expert. I run the same database checks. It then complains with the same error for Group #5. So I remove that as well. Now I have a new and unusual error when I attempt to run 'Show Query'. The error is: "Cannot determine the queries necessary to get data for this report" I have tried logging on/off the database and running Verify Database. No complaints until I try to run Show Query. When I attempt to run the report, it also throws the same error. Any ideas? Am I approaching this incorrectly? This is done in Crystal Reports 10. Note: this report is run with the SQL sa user to eliminate any permission issues.

    Read the article

  • Mac gcc non-virtual thunk error

    - by fret
    I'm getting these non-virtual thunk errors only in the Deployment build of my app. It uses a private framework called Lgi. Building on 10.5.8 using Xcode 3.1.4 (latest for Leopard?). The error looks like this: Ld /Users/matthew/Code/Scribe-Branches/v2.00/build/Development/Scribe.app/Contents/MacOS/Scribe normal i386 cd /Users/matthew/Code/Scribe-Branches/v2.00 /Developer/usr/bin/g++-4.0 -arch i386 -L/Users/matthew/Code/Scribe-Branches/v2.00/build/Development -F/Users/matthew/Code/Scribe-Branches/v2.00/build/Development -F/Users/matthew/Code/Lgi/build -F/Users/matthew/Code/Scribe-Branches/v2.00/../../Lgi/build/Development -F/Users/matthew/Code/Scribe-Branches/v2.00/../../Lgi/build/Development -F/Users/matthew/Code/Scribe-Branches/v2.00/../../Lgi/build/Deployment -F/Users/matthew/Code/Scribe-Branches/v2.00/../../Lgi/build/Development -F/Users/matthew/Code/Scribe-Branches/v2.00/../../Lgi/build/Deployment -filelist /Users/matthew/Code/Scribe-Branches/v2.00/build/Scribe.build/Development/Scribe.build/Objects-normal/i386/Scribe.LinkFileList -framework Carbon -framework Lgi -o /Users/matthew/Code/Scribe-Branches/v2.00/build/Development/Scribe.app/Contents/MacOS/Scribe Undefined symbols: "non-virtual thunk to GWindow::OnDrop(char*, GVariant*, GdcPt2, int)", referenced from: vtable for ScribeWnd in ScribeApp.o vtable for GShutdown in ScribeApp.o vtable for CalendarUi in Calendar.o vtable for CalendarViewWnd in CalendarView.o vtable for CalendarConfig in CalendarView.o vtable for ScribeExport in Exp_Scribe.o vtable for GNewMailDlg in GNewMailDlg.o ...etc. for lots of classes... Anyway, I know I'm not leaving those undefined, because it does in fact link and run fine in the Development build. Now, after googling the issue, the first thing to try is changing the optimization setting, which I did... and no dice. Same link error. So these virtual functions are initially defined in GDragDropTarget, and GWindow's inheritance looks like this: class LgiClass GWindow : public GView #ifndef WIN32 , public GDragDropTarget #endif (LgiClass being for __declspec export/import on win32) Any ideas on what to try next? Maybe I need to provide more info.

    Read the article

  • How should I Test a Genetic Algorithm

    - by James Brooks
    I have made quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way? To pose the question more generally, how do you test a non-deterministic method/function? Here is what I have thought of: Use a specific seed. This won't help if I make a mistake in the code in the first place, but it will help find bugs when refactoring. Use a known list of numbers. Similar to the above, but I could follow the code through by hand (which would be very tedious). Use a constant number. At least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1. Try to move as much of the non-deterministic code out of the GA as possible. Which seems silly, as that is the core of its purpose. Links to very good books on testing would be appreciated too.
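
    The first and third ideas combine naturally if the random source is injected rather than called statically. A minimal C# sketch of that shape (the interface and class names here are invented purely for illustration):

        using System.Collections.Generic;

        // Production code depends on an abstraction instead of calling a static RNG directly.
        public interface IRandomSource
        {
            double NextFloat(double min, double max);   // uniform value in [min, max)
        }

        public class Die
        {
            private readonly IRandomSource random;
            public Die(IRandomSource random) { this.random = random; }
            public int Roll() { return 1 + (int)(random.NextFloat(0, 1) * 6); }
        }

        // In tests, a scripted fake makes the "random" path fully deterministic.
        public class FakeRandomSource : IRandomSource
        {
            private readonly Queue<double> values;
            public FakeRandomSource(params double[] values) { this.values = new Queue<double>(values); }
            public double NextFloat(double min, double max) { return values.Dequeue(); }
        }

        // A test can then assert: new Die(new FakeRandomSource(0.999)).Roll() == 6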

    Read the article

  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data-driven testing. I have tried to deconstruct this to the simplest example. I am using Visual Studio 2012. I create a new unit test project. I am referencing System.Data. My code looks like this: namespace UnitTestProject1 { [TestClass] public class UnitTest1 { [DeploymentItem(@"OrderService.csv")] [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)] [TestMethod] public void TestMethod1() { try { Debug.WriteLine(TestContext.DataRow["ID"]); } catch (Exception ex) { Assert.Fail(); } } public TestContext TestContext { get; set; } } } I have a very small csv file for which I have set the Build Options to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the csv file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the csv is at the project root level, same as the .cs test code file. I have tried variations with xml as well as csv. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it. I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the csv file, or what specific error might be causing DataRow to fail to populate? I have tried the following csv files: ID 1 2 3 4 and ID, Whatever 1,0 2,1 3,2 4,3 So far, no dice.

    Read the article

  • Eclipse randomly exits after installation of Blackberry plugin/SDK

    - by canadiancreed
    Hello all. Since adding the BlackBerry Java classes from their website into Eclipse, I've had it where Eclipse will randomly close, with no discernible pattern, rhyme, error, or reason. Here are the environment/software packages that I am using: Windows XP SP2, Eclipse v3.5.1, BlackBerry Java Plugin v1.1.1.200911111641-15, BlackBerry Java SDK 4.5.0.21. I've tried the usual steps of a complete uninstall and reinstallation of Eclipse and the accompanying plugins on multiple systems with the same configuration, including one that had a fresh install of Windows XP SP2. Upgrading to Eclipse 3.6 didn't work (the plugin won't install as it's the wrong version), nor did downgrading to 3.4, for the same reason. I also increased the heap size to 512 (the system has two gigs of memory), as some research into Eclipse doing this type of thing with Groovy was resolved that way, but again, no dice. Eclipse works great when the BlackBerry plugins are not installed, and no errors or issues in the event log are helping to show what the issue with these plug-ins might be. So if anyone has run into this issue, and even better, has a solution, I'd love to hear about it. Thanks in advance. EDIT: In addition to my issue, autocomplete with the BlackBerry SDK seems to make this extremely unstable, almost a guaranteed crash. Is this fixable at all?

    Read the article

  • Access VBA sub with form as parameter doesn't alter the form

    - by Ski
    I have a Microsoft Access 2003 file with various tables of data. Each table also has a duplicate, named '[original table name]_working'. Depending on the user's choices in the switchboard, the form the user chooses to view must switch its record source to the working table. I refactored the relevant code to do such a thing into the following function today: Public Sub SetFormToWorking(ByRef frm As Form) With frm .RecordSource = rst![Argument] & "_working" .Requery Dim itm As Variant For Each itm In .Controls If TypeOf itm Is subForm Then With itm Dim childFields As Variant, masterFields As Variant childFields = .LinkChildFields masterFields = .LinkMasterFields .Form.RecordSource = .Form.RecordSource & "_working" .LinkChildFields = childFields .LinkMasterFields = masterFields .Form.Requery End With End If Next End With End Sub The lines of code that call the function look like this: SetFormToWorking Forms(rst![Argument]) and SetFormToWorking Forms(cmbTblList) For some reason, the above function doesn't change the record source for the form. I added the 'ByRef' keyword to the parameter just to be certain that it was passing by reference, but no dice. Hopefully someone here can tell me what I've done wrong?

    Read the article

  • IBasic Accumulator

    - by Tara
    I am trying to do an accumulator in IBasic for a college assignment, and I have the general stuff down, but I cannot get it to accumulate. The code is below. My question is how do I get it to accumulate and pass it to the different module? I'm trying to calculate how many right answers the user gets. Also, I need to calculate the percentage of right answers, so if the user gets 9 out of 10 right, they'd have answered 90% right. 'October 15, 2009 ' 'Lab 7.5 Programming Challenge 1 - Average Test Scores ' 'This is a dice game ' declare main() declare inputName(name:string) declare getAnswer(num1:int, num2:int) declare getResult(num1:int, num2:int, answer:int) declare avgRight(getRight:int) declare printInfo(name:string, getRight:int, averege:float) openconsole main() do:until inkey$<>"" closeconsole end sub main() def name:string def num1, num2, answer, total, getRight:int def averege:float inputName (name) getRight = 0 For counter = 1 to 10 getRight = getAnswer(num1, num2) getRight = getRight + 1 next counter average = avgRight (getRight) printInfo(Name, getRight, average) end sub inputName (name) Input "Please enter your name: " ,name return sub getAnswer(num1, num2) def answer, getRight:int num1 = rnd (10) + 1 num2 = rnd (10) + 1 Print num1, "+ " ,num2 Input "What is the answer to the equation? " ,answer getRight = getResult(num1, num2, answer) return getRight sub getResult(num1, num2, answer) def getRight:int if answer = num1 + num2 getRight = 1 else getRight = 0 endif return getRight sub avgRight(getRight) def average:float average = getRight / 10 return average sub printInfo(name, getRight, averege) Print "The students name is: " ,name Print "The number right is: " ,getRight Print Using ("&##.#&", "The average right is " ,averege * 100, "%") return
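
    For comparison, the usual accumulator shape looks something like the sketch below (written in Python purely for illustration, not IBasic, and ask_question is a hypothetical helper): the running total is only ever added to inside the loop, and the percentage uses non-integer division so 9/10 doesn't truncate to 0.

        right_answers = 0
        for _ in range(10):
            right_answers += ask_question()   # assume this returns 1 for right, 0 for wrong

        percent_right = right_answers / 10.0 * 100   # e.g. 9 right -> 90.0
        print("The number right is:", right_answers)
        print("The average right is {:.1f}%".format(percent_right))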

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as: sex (true/false) demographic classification (A, B, C, etc.) place of birth date of birth weight (recorded daily): the fact that is being recorded My requirements are to be able to run 'OLAP' queries that allow me to: 'slice and dice', 'drill up/down' the data, and generally be able to view the data from different perspectives After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are: Date dimension Geographical location Which have hierarchical attributes. However, I am struggling with how to model the following fields: sex (true/false) demographic classification (A, B, C, etc.) The reason I am struggling with these fields is that: They have no obvious hierarchical attributes which will aid aggregation (AFAIA) - which suggests they should be in a fact table They are mostly static or very rarely change - which suggests they should be in a dimension table. Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse - hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification - e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? Etc. Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema: CREATE TABLE sex_type (is_male int); CREATE TABLE demographic_category (id int, name varchar(4)); may not be the correct one.
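
    For what it's worth, a sketch of one common layout under the question's own assumption (sex and demographic classification as small dimension tables keyed from the fact table); all names are illustrative only:

        CREATE TABLE dim_sex         (sex_id INT PRIMARY KEY, is_male INT);
        CREATE TABLE dim_demographic (demographic_id INT PRIMARY KEY, name VARCHAR(4));
        CREATE TABLE dim_date        (date_id INT PRIMARY KEY, full_date DATE, cal_year INT, cal_quarter INT, cal_month INT);
        CREATE TABLE dim_location    (location_id INT PRIMARY KEY, country VARCHAR(50), region VARCHAR(50));

        CREATE TABLE fact_weight (
            date_id        INT REFERENCES dim_date(date_id),
            sex_id         INT REFERENCES dim_sex(sex_id),
            demographic_id INT REFERENCES dim_demographic(demographic_id),
            location_id    INT REFERENCES dim_location(location_id),
            weight         DECIMAL(6,2)
        );

        -- "How do male and female weights compare across demographic classifications?"
        SELECT d.name, s.is_male, AVG(f.weight) AS avg_weight
        FROM fact_weight f
        JOIN dim_sex s         ON s.sex_id = f.sex_id
        JOIN dim_demographic d ON d.demographic_id = f.demographic_id
        GROUP BY d.name, s.is_male;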

    Read the article

  • How to do fixed price quote for design sessions?

    - by Shaul
    Normally when I do a system for a customer, I do design sessions on an hourly rate and then come out with a fixed price quotation for the full system development. Now this customer has thrown me a curveball: he doesn't want an hourly rate for design, either - he wants me to quote a fixed price to do all the design, too! Not that he's trying to cheap out, but he doesn't want to be in a situation where the longer design stretches out, the more he has to pay - and I can understand that. For the business layer it was actually not too difficult to work with this, because from his original functional spec I got a good idea of what the core business objects were, and in our design agreement I defined several objects which would be covered by a fixed design price; if any new non-trivial objects were discovered, they would be considered variances, and those would be billed on an hourly rate. So far so good. But when it comes to the UI, things start getting a lot more woolly. How many screens will there be? Don't know yet. What's going to be on each screen? Don't know yet. All we know is that it's a "dashboard" type of system, and there will be a lot of visual reporting involved e.g. gauges, graphs, etc. So maybe make it fixed price per screen design? Not a great definition; he might say that everything is going to be on one screen. Maybe a price per "visual report" design, including ability to slice & dice? Again not so easy - it might be that the entire system is just one report, and all the intelligence is going to go into how to present that segmentation. Anyone have any ideas how to do a fixed price quotation for a UI design like this?

    Read the article

  • Powershell / .Net: Get a reference to an object returned by a method

    - by Dan Menes
    I am teaching myself PowerShell by writing a simple parser. I use the .NET Framework class Collections.Stack. I want to modify the object at the top of the stack in place. I know I can pop() the object off, modify it, and then push() it back on, but that strikes me as inelegant. First, I tried this: $stk = new-object Collections.Stack $stk.push( (,'My first value') ) ( $stk.peek() ) += ,'| My second value' Which threw an error: Assignment failed because [System.Collections.Stack] doesn't contain a settable property 'peek()'. At C:\Development\StackOverflow\PowerShell-Stacks\test.ps1:3 char:12 + ( $stk.peek <<<< () ) += ,'| My second value' + CategoryInfo : InvalidOperation: (peek:String) [], RuntimeException + FullyQualifiedErrorId : ParameterizedPropertyAssignmentFailed Next I tried this: $ary = $stk.peek() $ary += ,'| My second value' write-host "Array is: $ary" write-host "Stack top is: $($stk.peek())" Which prevented the error but still didn't do the right thing: Array is: My first value | My second value Stack top is: My first value Clearly, what is getting assigned to $ary is a copy of the object at the top of the stack, so when I modify the object in $ary, the object at the top of the stack remains unchanged. Finally, I read up on the [ref] type, and tried this: $ary_ref = [ref]$stk.peek() $ary_ref.value += ,'| My second value' write-host "Referenced array is: $($ary_ref.value)" write-host "Stack top is still: $($stk.peek())" But still no dice: Referenced array is: My first value | My second value Stack top is still: My first value I assume the peek() method returns a reference to the actual object, not a clone. If so, then the reference appears to be being replaced by a clone by PowerShell's expression processing logic. Can somebody tell me if there is a way to do what I want to do? Or do I have to revert to pop() / modify / push()?
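
    One sketch that avoids the pop/modify/push dance (an assumption about acceptable design, not the only answer): store a mutable collection such as an ArrayList on the stack, so Peek() hands back a reference whose Add() mutates in place, instead of an array that += has to replace:

        $stk = New-Object System.Collections.Stack
        $top = New-Object System.Collections.ArrayList
        [void]$top.Add('My first value')
        $stk.Push($top)

        # Peek() returns the same ArrayList instance, so Add() modifies the stack top in place
        [void]$stk.Peek().Add('| My second value')
        Write-Host "Stack top is now: $($stk.Peek())"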

    Read the article

  • Nhibernate , collections and compositeid

    - by Ciaran
    Hi, banging my head here and thought that someone out there might be able to help. I have the tables below. Bucket( bucketId smallint (PK) name varchar(50) ) BucketUser( UserId varchar(10) (PK) bucketId smallint (PK) ) The composite key is not the problem (that's OK, I know how to get around it), but I want my Bucket class to contain an IList of BucketUser. I read the online reference and thought that I had cracked it, but haven't. The two mappings are below -- bucket -- <id name="BucketId" column="BucketId" type="Int16" unsaved-value="0"> <generator class="native"/> </id> <property column="BucketName" type="String" name="BucketName"/> <bag name="Users" table="BucketUser" inverse="true" generic="true" lazy="true"> <key> <column name="BucketId" sql-type="smallint"/> <column name="UserId" sql-type="varchar"/> </key> <one-to-many class="Bucket,Impact.Dice.Core" not-found="ignore"/> </bag> -- bucketUser --
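
    A sketch of how this kind of association is often mapped in NHibernate (the class names follow the question; everything else is illustrative and untested, and the BucketUser class would also need Equals/GetHashCode overridden for the composite key):

        <!-- BucketUser: composite key over UserId + BucketId -->
        <class name="BucketUser" table="BucketUser">
          <composite-id>
            <key-property name="UserId"   column="UserId"   type="String" length="10" />
            <key-property name="BucketId" column="BucketId" type="Int16" />
          </composite-id>
        </class>

        <!-- In the Bucket mapping, the collection key is only BucketId, and it holds BucketUser rather than Bucket -->
        <bag name="Users" table="BucketUser" lazy="true">
          <key column="BucketId" />
          <one-to-many class="BucketUser" />
        </bag>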

    Read the article

  • How does MSN filter spam?

    - by Marius
    I am trying to create a newsletter for our business. The last few days have been spent testing, and one of the things I have noticed is that MSN seemingly randomly filters out some of my test messages. This is super-frustrating. I like the PEAR Mail MIME package, and have been using that. I may send one email from one of our servers, resulting in the message getting through, and in the next minute, the same message sent from our other server ends up in the junk folder. Then if I add an attachment to the email, the same message passes through the filter from the server that was previously blocked. I think. What the ####? Is this like throwing dice, without me having any control over what is trash and what isn't? I have sent email from several servers, all of which are shared. But I am unsure this is the problem. The problem is that it is seemingly random how MSN filters email. Some emails get through, and some others don't, for seemingly irrational reasons. I am running out of ideas, but I am not giving up. Therefore I am writing to you for HARDCORE technical info on how MSN filters spam.
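
    One concrete thing worth checking, sketched below as an aside rather than a full answer: sender authentication. Hotmail/MSN weighs SPF and sender reputation heavily, and mail sent from two different shared servers will authenticate differently unless both are listed for the sending domain. The domain and addresses here are placeholders only:

        example.com.   3600   IN   TXT   "v=spf1 ip4:203.0.113.10 ip4:203.0.113.25 ~all"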

    Read the article
